DSPRelated.com
Forums

Sample Rate conversion

Started by naebad October 2, 2006
If I am changing a sample rate by a fractional amount should I always
UPSAMPLE first and then DOWNSAMPLE?

For example to change down by 2/3 should I upsample by 2 and downsample
by 3.

To change by 3/2 should I upsample by 3 and downsample by 2?

Also for the above cases should the filter 'in the middle' have cutoff
pi/2?

The reason I am asking is that it appears that if L=M then upsampling
followed by downsampling does not give the same result as downsampling
and then upsampling, since by downsampling first we remove information
(samples) and then add zeros to upsample. The other way round makes more
sense to me at least.

Thanks

Naebad
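The L = M observation in the question can be checked in a few lines. This is a sketch in NumPy (the function names are mine, not from the thread): with L = M = 2, zero-insertion followed by decimation returns the original signal, but decimating first discards every other sample, and zero-insertion afterwards cannot restore them.

```python
import numpy as np

def upsample(x, L):
    """Insert L-1 zeros between samples (no interpolation filter)."""
    y = np.zeros(len(x) * L)
    y[::L] = x
    return y

def downsample(x, M):
    """Keep every M-th sample."""
    return x[::M]

x = np.array([1.0, 2.0, 3.0, 4.0])

up_then_down = downsample(upsample(x, 2), 2)   # recovers x exactly
down_then_up = upsample(downsample(x, 2), 2)   # [1, 0, 3, 0] -- information lost
```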

You should follow your instinct and upsample first. The filter cutoff will
depend on your downsample rate.

"naebad" <minnaebad@yahoo.co.uk> wrote in message 
news:1159828643.719067.65740@m7g2000cwm.googlegroups.com...
> If I am changing a sample rate by a fractional amount should I always
> UPSAMPLE first and then DOWNSAMPLE?
You might think about the filtering involved.

In order to upsample you need to conceptually:
- increase the sample rate by adding zero-valued samples
- interpolate; another word for lowpass filtering.

If you upsample by a factor of 2 it's the same thing as repeating the original 
(repeating) spectrum twice. That is, the spectrum doesn't change at all - only 
your perception of it. Then you lowpass filter out the region between fs/4 and 
3fs/4. Done well this has no effect except to make the zero-valued samples now 
be interpolated values. [fs here refers to the *new* sample rate, or 2fs 
relative to the original sample rate.]

In order to downsample by N you need to conceptually:
- lowpass filter to fs/2N
- drop N-1 out of every N contiguous samples, leaving a factor of 1/N samples.

So, if you downsample first you will be lowpass filtering more than you may 
need to. Some implementations move things around so that this isn't an issue, 
I believe.

fred harris' book is a good reference.

Fred
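The steps above can be combined into one pass: zero-insert by L, apply a single lowpass at the stricter of the two cutoffs (pi/max(L, M) at the upsampled rate), then decimate by M. A NumPy sketch, assuming a windowed-sinc FIR (the function name and tap count are mine):

```python
import numpy as np

def resample_rational(x, L, M, ntaps=101):
    """Resample x by L/M: zero-insert, one lowpass, then decimate."""
    up = np.zeros(len(x) * L)
    up[::L] = x                      # zero-insertion: spectrum unchanged, images appear
    fc = 0.5 / max(L, M)             # cutoff in cycles/sample at the upsampled rate
    n = np.arange(ntaps) - (ntaps - 1) / 2
    h = 2 * fc * np.sinc(2 * fc * n) * np.hamming(ntaps)
    h *= L                           # gain of L compensates for the inserted zeros
    y = np.convolve(up, h, mode='same')
    return y[::M]                    # decimation is now safe: images already removed

# e.g. changing down by 2/3: upsample by 2, downsample by 3
y = resample_rational(np.ones(300), 2, 3)
```

Note the single filter serves both roles Fred describes: it interpolates the zero-stuffed samples *and* band-limits before the drop by M.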
naebad wrote:
> If I am changing a sample rate by a fractional amount should I always
> UPSAMPLE first and then DOWNSAMPLE?
The Secret Rabbit Code converter can do this:

http://www.mega-nerd.com/SRC/

Erik
--
Erik de Castro Lopo
Gambling(n): A discretionary tax on those asleep during high school maths.
naebad wrote:

> If I am changing a sample rate by a fractional amount should I always
> UPSAMPLE first and then DOWNSAMPLE?
Conceptually, you need to upsample then downsample. Doing it the other way 
around low-pass filters your data, as Fred pointed out.

I said conceptually because, in a practical system, upsampling may put the 
sample rate beyond the limit of what you can reasonably compute within a 
sample interval. Fortunately, there are tricks that take advantage of the math 
so that the actual processing is not done at the upsampled rate. Look for 
poly-phase filter banks, which are useful for small integer ratios.

Another approach is to consider that resampling is essentially figuring out an 
intermediate value between two of your samples that is most likely the value 
at the resampled instant. One can use interpolation to do that. A Farrow 
resampler uses parabolic estimates to get arbitrarily specified fractional 
samples.
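The poly-phase trick mentioned above can be shown in a few lines. This is my own minimal version, not from the thread: to upsample by L, split the interpolation filter h into L subfilters h[k::L], run each at the *input* rate, and interleave the outputs. No arithmetic is spent on the inserted zeros, yet the result matches filtering the zero-stuffed signal.

```python
import numpy as np

def polyphase_upsample(x, h, L):
    """Upsample by L via L polyphase branches running at the input rate."""
    outs = [np.convolve(x, h[k::L]) for k in range(L)]
    n = max(len(o) for o in outs)
    y = np.zeros(n * L)
    for k, o in enumerate(outs):
        y[k::L][:len(o)] = o         # interleave the branch outputs
    return y

def direct_upsample(x, h, L):
    """Reference: zero-stuff, then filter at the full upsampled rate."""
    up = np.zeros(len(x) * L)
    up[::L] = x
    return np.convolve(up, h)

x = np.arange(50.0)
h = np.hamming(8)                    # stand-in for a real interpolation filter
p = polyphase_upsample(x, h, 2)
d = direct_upsample(x, h, 2)
```

The two outputs agree sample for sample; the polyphase version just never multiplies by the zeros.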
Wouldn't this change the length of your data as well (as the result is a
convolution between the input stream and the filter response)?

My concern is the application of such filters in an OFDM transmitter,
where the input to the interpolation filter is a stream made up of back to
back symbols. At the output of the filter, the symbol separation (in time)
doesn't exist any more, as a result of the convolution with the filter
response.


> Done well this has no effect except to make the
> zero-valued samples now be interpolated values.
On Oct 2, 3:37 pm, "naebad" <minnae...@yahoo.co.uk> wrote:
> If I am changing a sample rate by a fractional amount should I always
> UPSAMPLE first and then DOWNSAMPLE?
If you are using IIRs to filter and don't want to lose information, then 
upsample first, then downsample. If you can afford to lose information (no 
high frequencies, or not interested in high frequencies), then you might be 
able to use the opposite order, depending on the ratios and the highest 
frequency you want to keep, producing some computational efficiencies (lower 
peak data rates).

If you are using a FIR-type filter, then you can just interpolate the new 
samples directly for any ratio, either calculating your filter taps 
on-the-fly, or by using table lookup or Farrow approximations for your filter 
function. No need to do two passes and calculate unneeded data points.

IMHO. YMMV.
--
rhn A.T nicholson d.0.t C-o-M
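"Interpolate the new samples directly for any ratio" can be sketched as follows. For brevity this uses linear interpolation via np.interp; in a real design a table-lookup windowed sinc or Farrow polynomial would replace it, as the post says. The function name and the output-length convention are mine:

```python
import numpy as np

def resample_arbitrary(x, ratio):
    """Resample x by any ratio (>1 = more output samples) by evaluating
    an interpolant directly at each output instant -- one pass, no
    intermediate upsampled stream."""
    t_out = np.arange(0, len(x) - 1, 1.0 / ratio)   # output instants, in input-sample units
    return np.interp(t_out, np.arange(len(x)), x)

y = resample_arbitrary(np.arange(10.0), 2.0)        # a ramp, resampled 2x
```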
"cocioc" <andrei.td@gmail.com> wrote in message 
news:qpudnX9iT_XtAKrYnZ2dnUVZ_t2dnZ2d@giganews.com...
> Wouldn't this change the length of your data as well (as the result is a
> convolution between the input stream and the filter response)?
Sure. But why are you concerned?

I'm not sure what you mean by "such filters". All filters have a finite 
transient response. Whether you view the output of the filter as "your data" 
seems to me to depend on whether you are filtering a single block of data or 
filtering continuously. One could say that not all of the filter output 
represents "your data" if you're processing a block. One could also say that 
the filter output represents "your data" if you're processing a never-ending 
stream.

Certainly one needs to consider the impact on intersymbol interference.

Fred
Hi Fred,

My application is an OFDM transmitter. The output of the IFFT needs to be
oversampled by a factor of 4 (after adding the CP). I thought of doing the
oversampling by interpolating (inserting "zeroes") and low pass filtering
the output of the interpolator to remove the unwanted images. I don't
think this is going to work, though, because the LPF will "smudge" the
adjacent symbols, destroying the time domain separation between them,
which is required by the receiver.

One other method for implementing the oversampling is to use an IFFT 4
times as large as what I need, but that doesn't work for me, because I
don't have access to the IFFT, only to its output.

Does anyone know what the standard method is for oversampling an OFDM
signal? The unwanted images need to be removed, so simple zero-insertion
without low pass filtering is not going to work.

Regards,

Andrei
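The "IFFT 4 times as large" method mentioned above can be applied per symbol even when only the IFFT output is available, because it is equivalent to zero-padding the middle of the symbol's spectrum. A sketch (the function name is mine; the Nyquist-bin split for real signals is ignored here for simplicity):

```python
import numpy as np

def oversample_symbol(sym, K):
    """Oversample one complex symbol by K: FFT back to the frequency domain,
    zero-pad between the positive and negative frequencies, and take a
    K-times-larger IFFT. Original sample instants are preserved."""
    N = len(sym)
    S = np.fft.fft(sym)
    Spad = np.concatenate([S[:N // 2], np.zeros((K - 1) * N), S[N // 2:]])
    return K * np.fft.ifft(Spad)     # scale by K so original sample values are kept

# e.g. a single-subcarrier symbol of length 8, oversampled 4x
sym = np.exp(2j * np.pi * np.arange(8) / 8)
y = oversample_symbol(sym, 4)
```

Because this operates on one symbol at a time, it avoids the cross-symbol smearing that a time-domain interpolation filter would introduce.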


cocioc wrote:
> Does anyone know what the standard method is for oversampling an OFDM
> signal?
You should be able to use a Farrow resampler for this. If the sample times 
line up with the original samples, they are not changed.
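A minimal Farrow-style sketch (my own illustration, not a production design): a cubic Lagrange interpolator evaluated in Horner form. The polynomial coefficients are fixed combinations of four neighbouring samples; only the fractional offset mu changes per output sample, which is what makes the structure cheap. At mu = 0 the output is exactly an original sample, matching the remark above.

```python
import numpy as np

def farrow_cubic(x, i, mu):
    """Interpolated value at fractional position (i + 1) + mu, 0 <= mu < 1,
    from the four samples x[i..i+3] (cubic Lagrange, Horner evaluation)."""
    a, b, c, d = x[i], x[i + 1], x[i + 2], x[i + 3]
    c0 = b
    c1 = -a / 3 - b / 2 + c - d / 6
    c2 = a / 2 - b + c / 2
    c3 = -a / 6 + b / 2 - c / 2 + d / 6
    return ((c3 * mu + c2) * mu + c1) * mu + c0   # only mu varies per sample
```

Since the interpolator is exact for cubics, sampling t^2 at t = 0..3 and asking for t = 1.5 returns exactly 2.25.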