
Does interpolating/decimating affect Bandwidth?

Started by Chriss January 21, 2013
I'm taking a DSP class at school and I had a question on what happens to
the bandwidth as we interpolate/decimate the signal.

Please correct me if anything I say is wrong. From my understanding, the
term bandwidth in signal processing is a width of the range of frequencies
that a signal uses (measured in Hz). In other words, it's the difference
between the highest frequency component and the lowest frequency
component.

Now, by doing interpolation/decimation we increase/decrease our sampling
rate. For example, interpolation works by upsampling (inserting zeros
between the original samples) the signal first, but that creates unwanted
replicas at multiples of the original sampling rate. So then we low-pass
filter it to remove those replicas. If we look at it in the frequency
domain, after interpolation we have the scaled version of our original
signal. So, our fmax becomes fmax/L (where L is the upsampling factor). 
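(For concreteness, the zero-stuff-then-filter step described above can be sketched in a few lines of Python with numpy/scipy; the sample rate, tone frequency, and filter length below are arbitrary choices of mine, not anything specified in the class.)

import numpy as np
from scipy import signal

fs = 1000.0                            # original sample rate in Hz (arbitrary)
L = 2                                  # upsampling factor (arbitrary)
n = np.arange(256)
x = np.sin(2 * np.pi * 200.0 * n / fs) # a 200 Hz tone sampled at fs

# Step 1: upsample by zero-stuffing -- insert L-1 zeros between samples.
xz = np.zeros(L * len(x))
xz[::L] = x

# Step 2: low-pass filter at the original Nyquist (fs/2) to remove the image.
# firwin's cutoff is normalized to the *new* Nyquist (L*fs/2), so fs/2
# corresponds to 1/L.
h = signal.firwin(101, 1.0 / L)
y = L * signal.lfilter(h, 1.0, xz)     # the factor L restores the amplitude

# y is (approximately) the same 200 Hz tone, now sampled at L*fs = 2000 Hz.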

My question is, are we changing the bandwidth of the signal by changing the
sampling rate? The answer seems to be no, but I can't figure out why.

Thanks,
Chris


On 1/21/13 4:25 PM, Chriss wrote:
> My question is, are we changing the bandwidth of the signal by changing the
> sampling rate? The answer seems to be no, but I can't figure out why.
Certainly downsampling (decimation) reduces the available bandwidth. If there
is information at frequencies above the new Nyquist, that information is
either lost or aliased (which also loses it).

Upsampling increases the *available* bandwidth, but does not, in itself,
create new information to fill that newly available bandwidth. So proper
upsampling will leave some empty spectrum (improper upsampling will have
images, but those images are not new information).

--
r b-j                         rbj@audioimagination.com

"Imagination is more important than knowledge."
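(A quick numerical way to see both halves of that claim, sketched with numpy/scipy -- the tone frequencies and sample rates below are my own example numbers, not anything from the original post:)

import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 400 * t)  # tones at 100 Hz and 400 Hz

# Proper decimation by 2: the new Nyquist is 250 Hz, so the anti-alias
# filter removes the 400 Hz tone -- that information is gone.
xd = signal.resample_poly(x, up=1, down=2)

# Proper interpolation by 2: both tones stay where they were, and the new
# band from 500 Hz up to the new 1000 Hz Nyquist is simply empty.
xu = signal.resample_poly(x, up=2, down=1)

for sig, rate, name in [(x, fs, "original"),
                        (xd, fs / 2, "decimated x2"),
                        (xu, 2 * fs, "interpolated x2")]:
    f, p = signal.periodogram(sig, fs=rate)
    print(name, "-> energy near (Hz):", np.round(f[p > 0.01 * p.max()]))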
On Mon, 21 Jan 2013 15:25:55 -0600, Chriss wrote:

> I'm taking a DSP class at school and I had a question on what happens to
> the bandwidth as we interpolate/decimate the signal.
>
> Please correct me if anything I say is wrong. From my understanding, the
> term bandwidth in signal processing is a width of the range of
> frequencies that a signal uses (measured in Hz). In other words, it's
> the difference between the highest frequency component and the lowest
> frequency component.
More accurately, it's the difference between the highest and lowest _significant_ frequency components -- but your statement is pretty close.
> Now, by doing interpolation/decimation we increase/decrease our sampling
> rate. For example, interpolation works by upsampling (inserting zeros
> between the original samples) the signal first, but that creates
> unwanted replicas at multiples of the original sampling rate. So then we
> low-pass filter it to remove those replicas. If we look at it in the
> frequency domain, after interpolation we have the scaled version of our
> original signal. So, our fmax becomes fmax/L (where L is the upsampling
> factor).
Interpolation (or decimation) does not scale the spectrum of the signal in
real terms. It duplicates it (interpolation), or aliases any of it that's
outside of the Nyquist limit (decimation), but it does not change the
baseband portion of the signal.

You may be thinking of what happens to the spectrum of the signal with
respect to the sampling rate: if you interpolate by a factor of four, then a
signal component that used to be sitting at (sampling rate)/8 will now be at
(sampling rate)/32 -- but that is because the sampling rate itself has
increased by a factor of four.
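(That can be checked numerically; here is a rough sketch with numpy/scipy, where the 800 Hz rate and the tone at fs/8 are just example numbers I picked:)

import numpy as np
from scipy import signal

fs = 800.0                    # original sample rate (Hz), arbitrary
f0 = fs / 8                   # a tone at fs/8 = 100 Hz
t = np.arange(4096) / fs
x = np.sin(2 * np.pi * f0 * t)

L = 4
y = signal.resample_poly(x, up=L, down=1)   # interpolate by 4 (zero-stuff + filter)

for sig, rate, name in [(x, fs, "original"), (y, L * fs, "interpolated")]:
    f, p = signal.periodogram(sig, fs=rate)
    fpk = f[np.argmax(p)]
    print(f"{name}: peak at {fpk:.1f} Hz = (sample rate)/{rate / fpk:.0f}")

# original:     peak at 100.0 Hz = (sample rate)/8
# interpolated: peak at 100.0 Hz = (sample rate)/32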
> My question is, are we changing the bandwidth of the signal by changing
> the sampling rate? The answer seems to be no, but I can't figure out
> why.
No, you are not. You probably cannot figure out why because you believed
that interpolation scaled the frequency.

HTH.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
Thanks for the responses!

I still feel like I'm a little confused. Please refer to the image below
that shows the upsampling process. In this case, L=2. Then, a low-pass
filter with a cutoff frequency at pi/L is used to filter out the signal
centered at 2pi/L and multiples of that. So now, if we compare the
original signal, which contained frequencies from -pi to pi, the
interpolated signal is located at -pi/2 to pi/2. So interpolation
caused our original signal to shrink (similarly, decimation would cause
it to get wider). It seems to me that we lost all the information that
was present from pi/2 to pi in the original signal.

Doesn't this cause the bandwidth to change as well then?
I'm sorry if I'm completely missing the point; I'm just trying to
understand the basics.

http://imageshack.us/photo/my-images/687/upsamplingfreqdomain.gif/

Thanks again for everything,
Chris 


On Mon, 21 Jan 2013 17:21:55 -0600, Chriss wrote:
(Top posting fixed)
> So now, if we compare the original signal, which contained frequencies
> from -pi to pi, the interpolated signal is located at -pi/2 to pi/2. So
> interpolation caused our original signal to shrink (similarly,
> decimation would cause it to get wider). It seems to me that we lost all
> the information that was present from pi/2 to pi in the original signal.
>
> Doesn't this cause the bandwidth to change as well then?
>
> http://imageshack.us/photo/my-images/687/upsamplingfreqdomain.gif/
Look at the labels on the x axes of those two graphs. They are normalized to
the sampling rate -- this is why the top one refers to a sampling period T,
and the bottom to a sampling period T' (T' equals T/L: trust me, it has to
for those figures to work).

This normalization makes sense once you're playing around in sampled time,
because _in sampled time_ a frequency of pi/2, or pi, or whatever, always
means the same thing. It can make life a bit confusing if you're trying to
think of things in real-world terms like Hz or seconds -- but doing work in
sampled time always generates a bit of that sort of confusion, which you
have to remember when you're doing your book-keeping.

If you're getting that out of a textbook, I suggest you go back and
carefully double-check how they are defining T' -- I think that'll clear
things up.

--
My liberal friends think I'm a conservative kook.
My conservative friends think I'm a liberal kook.
Why am I not happy that they have found common ground?

Tim Wescott, Communications, Control, Circuits & Software
http://www.wescottdesign.com
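(If it helps, that bookkeeping is just a pair of conversions; a tiny Python sketch, with the 1000 Hz rate and the 400 Hz tone being my own example numbers:)

import numpy as np

def hz_to_rad_per_sample(f_hz, fs_hz):
    # in sampled time, 2*pi rad/sample corresponds to the sample rate
    return 2 * np.pi * f_hz / fs_hz

fs, L = 1000.0, 2
f_tone = 400.0   # a real-world tone, in Hz

print(hz_to_rad_per_sample(f_tone, fs) / np.pi)      # 0.8  (i.e. 0.8*pi before)
print(hz_to_rad_per_sample(f_tone, L * fs) / np.pi)  # 0.4  (i.e. 0.4*pi after)
# The tone is still at 400 Hz; only the normalized number changed, because
# the sample rate it is normalized against changed.

Note that a component that used to sit at 0.8*pi (inside the pi/2-to-pi region you were worried about) is still there after interpolation -- it just reads as 0.4*pi on the new normalized axis.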
On Mon, 21 Jan 2013 17:21:55 -0600, "Chriss" <62698@dsprelated> wrote:

> So now, if we compare the original signal, which contained frequencies
> from -pi to pi, the interpolated signal is located at -pi/2 to pi/2. So
> interpolation caused our original signal to shrink (similarly,
> decimation would cause it to get wider). It seems to me that we lost all
> the information that was present from pi/2 to pi in the original signal.
>
> Doesn't this cause the bandwidth to change as well then?
>
> http://imageshack.us/photo/my-images/687/upsamplingfreqdomain.gif/
Hello Chris,

It's very common, when first learning DSP, to be a bit confused by the
different frequency-axis notations used by different authors of books,
articles, application notes, blogs, etc. So don't feel bad. I remember being
confused the first time I saw a spectral plot where the time-domain data's
sample rate, a point on the freq axis, was labeled as '2pi'. I thought, "How
the heck can that make sense??? 2pi is a constant!!!"

Chris, I can think of four common (popular) ways to label the freq axis in
DSP (sampled data) discussions. They are:

  1) freq measured in Hz (cycles/second)
  2) freq measured in radians/second
  3) freq measured in radians/sample
  4) freq measured as a factor of the Fs sample rate

Freq notations 1) and 2) can be used when talking about analog signals, but
freq notations 3) and 4) only make sense for sampled signals because they
are intimately related to the sample rate of a sampled signal.

Let's say we're talking about the spectrum of a time-domain discrete
sequence whose sample rate is Fs = 1000 Hz. In this situation the above four
freq-axis notations are related to each other as follows:

  1)  0 Hz          250 Hz          500 Hz           1000 Hz
  2)  0 rad/sec     500pi rad/sec   1000pi rad/sec   2000pi rad/sec
  3)  0 rad/samp    pi/2 rad/samp   pi rad/samp      2pi rad/samp
  4)  0*Fs          Fs/4            Fs/2             Fs

Now, looking at the top spectral plot your above URL points to, we see
spectral replications, so we know that plot is the spectrum of a
discrete-time sequence (NOT a continuous-time analog signal). Let's assume
the Fs sample rate of the discrete-time sequence was 1000 Hz. In that case
the top spectral plot shows that the signal has spectral energy from
-500 Hz to +500 Hz. We can state that your signal has a "one-sided"
bandwidth of 500 Hz.

Now if you create a new time-domain sequence by stuffing a zero-valued
sample in between each of the original time sequence's samples (upsample by
L = 2), your new sequence's spectrum will be that shown in the bottom
spectral plot. Regarding that new upsampled sequence, we can say it:

  * has a sample rate of Fs = 2000 Hz (2pi rad/sample),
  * has the original spectral energy centered at zero Hz and spectral
    replications at multiples of the Fs = 2000 Hz sample rate (multiples
    of 2pi rad/sample),
  * has a "spectral image" centered at 1000 Hz (2pi/L = pi rad/sample).

If you apply that new sequence to the input of a digital lowpass filter
whose cutoff frequency is 500 Hz (pi/2 rad/sample), the filter's output
sequence will not contain the "spectral image" centered at 1000 Hz
(pi rad/sample). The filter's output sequence will contain the original
spectral energy centered at zero Hz and spectral replications at multiples
of the Fs = 2000 Hz sample rate (multiples of 2pi rad/sample).

So what's it all mean? It means that when using freq notation 1) the
original sequence had a "one-sided" bandwidth of 500 Hz, and the filter's
output sequence also has a "one-sided" bandwidth of 500 Hz. So when using
freq notation 1) there was NO change in bandwidth.

However, when using freq notation 3), a notation that's tied to the sample
rate, there was a reduction in bandwidth by a factor of two. That is, the
original signal had a one-sided bandwidth of pi rad/sample and the
interpolated (filter output) signal has a one-sided bandwidth of
pi/2 rad/sample.

So when someone asks you, "Does interpolation by two cause a change in
bandwidth?", you should say, "It depends on which freq-axis notation we're
using."

Hope that helps.
[-Rick-]
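(Rick's four notations differ only by scale factors; here is a small Python sketch, my own, that reproduces his Fs = 1000 Hz table:)

import numpy as np

fs = 1000.0
print("      Hz       rad/sec        rad/sample     factor of Fs")
for f in (0.0, 250.0, 500.0, 1000.0):
    w_s = 2 * np.pi * f               # notation 2: radians/second
    w_n = 2 * np.pi * f / fs          # notation 3: radians/sample
    print(f"{f:8.1f}   {w_s / np.pi:7.1f}*pi     {w_n / np.pi:5.2f}*pi        {f / fs:5.2f}*Fs")

# After zero-stuffing by L = 2 the sample rate becomes 2000 Hz, so the same
# 500 Hz one-sided bandwidth now reads as pi/2 rad/sample instead of pi.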
> My question is, are we changing the bandwidth of the signal by changing
> the sampling rate? The answer seems to be no, but I can't figure out
> why.
Interpolating a signal (after it has been acquired by an ADC from the real
world) is meant to preserve the signal's bandwidth; it is a re-creation of
new signal samples as if the ADC had been oversampling to start with. It is
done by regularly inserting new samples derived from the given trend of the
signal. The method of zero insertion followed by a filter is just one
implementation of interpolation. Thus the result is meant to have the same
bandwidth as the original signal, although it is never ideal (the ideal is
to sample at the ADC, otherwise you need an infinitely long signal vector).
The image residue left over after filtering is a measure of how much the
new samples deviate from the signal trend.

On the other hand, if you are playing out digital values from a given LUT,
then the bandwidth will depend on the sampling frequency and will shrink or
expand proportionally. This is not what you are supposed to do with
real-world signals, as they are meant to keep their own sampling rate. But
it can be useful in some cases: for example, a one-cycle sine table of, say,
100 samples can generate various tones by changing the sampling rate on the
same table, or by using one sampling rate but regularly picking up only
some of the 100 samples.

Kadhiem
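(The sine-table idea at the end can be made concrete with a small Python sketch; the 100-sample table, the 8000 Hz rate, and the step sizes are just example numbers of mine:)

import numpy as np

TABLE_LEN = 100
table = np.sin(2 * np.pi * np.arange(TABLE_LEN) / TABLE_LEN)  # one sine cycle

def tone_from_table(fs_hz, step, n_samples):
    # step through the table with a fixed increment;
    # output frequency = fs_hz * step / TABLE_LEN
    idx = (step * np.arange(n_samples)) % TABLE_LEN
    return table[idx]

fs = 8000.0
for step in (1, 2, 5):
    y = tone_from_table(fs, step, 1024)
    print(f"step {step}: {len(y)} samples of a {fs * step / TABLE_LEN:.0f} Hz tone")
# step 1 -> 80 Hz, step 2 -> 160 Hz, step 5 -> 400 Hz: the same table gives
# different tones depending on how it is read out, which is why "bandwidth"
# scales with the readout rate here, unlike for a real-world signal.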
On Monday, 21 January 2013 21:25:55 UTC, Chriss  wrote:
> My question is, are we changing the bandwidth of the signal by changing
> the sampling rate? The answer seems to be no, but I can't figure out
> why.
The replies already given are helpful. It can be confusing, though, because
of the different ways of defining 'decimation' and 'interpolation',
depending on whether these include filtering (which can affect bandwidth).

For example, Matlab and Octave define 'decimation' as reducing the number of
samples including a low-pass filter that is applied before the sample rate
reduction but during the 'decimation' operation -- such a filter does change
the bandwidth. For Matlab and Octave, the term 'downsample' is defined to
mean reducing the sample rate without low-pass filtering. Similarly, for
Matlab and Octave, 'interpolation' includes filtering after inserting
zeroes, which again does change the bandwidth -- the term 'upsampling' being
defined as increasing the sample rate without filtering.

I, and some others, use the term 'decimation' to refer to simple elimination
of samples and 'interpolation' to refer to insertion of zeroes between
samples, which I think is an older (pre-Matlab) way of using the terms. If
you do not include a filter in the process then the bandwidth is not
changed, although aliasing may make it difficult to determine what the
signal's spectrum was.
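(If you end up in Python rather than Matlab/Octave, scipy draws a similar line; this mapping is my own reading of the scipy documentation, and the rates and tones below are example numbers:)

import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(2048) / fs
x = np.sin(2 * np.pi * 100 * t) + np.sin(2 * np.pi * 400 * t)

# Plain downsampling (no filter): keep every 2nd sample.  At the new 500 Hz
# rate the 400 Hz tone aliases down to 100 Hz, but nothing was filtered away.
x_down = x[::2]

# Decimation in the Matlab/Octave sense: anti-alias filter, then downsample.
# Everything above the new 250 Hz Nyquist is removed, so the bandwidth
# genuinely changes.
x_dec = signal.decimate(x, 2)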
> I, and some others, use the term 'decimation' to refer to simple
> elimination of samples and 'interpolation' to refer to insertion of
> zeroes between samples, which I think is an older (pre-Matlab) way of
> using the terms.
Interpolation is literally just what it says: get new sample points based on
the signal trend. (Extrapolation: get samples further out according to the
signal trend.) Interpolation may also target getting samples at specific
points of a known function. Interpolation is used to upsample a signal, or
it may just be used to apply a fractional delay without upsampling, or to
get new points. Using a filter to interpolate is purely for the purpose of
removing the spectral copies; the filter must pass the signal itself and is
not meant to remove any part of the signal.

Decimation is also meant to keep the signal bandwidth. It does not need any
filter if no aliasing is expected. For example, if you have a signal at a
50 Msps sampling rate that you know has a bandwidth of 0~10 MHz, and you
want to decimate it down to 25 Msps, then just discard one sample out of
every two. The signal will still occupy a 10 MHz bandwidth. Any noise above
12.5 MHz will alias, but for a clean signal you can ignore that noise. If,
however, your signal is noisy, then you must use a filter. Here the purpose
of the filter is to remove any power beyond 12.5 MHz while passing your
signal (thus a cutoff at 10 MHz, rolling off sharply toward 12.5 MHz).

Decimation is used to downsample, but it can also be used for fractional
delay without downsampling. For example, you interpolate a signal by 10,
then decimate by 10 but start from the 4th sample, to get a 3/10-sample
delay.

So in short, the filter is there to pass your signal in both cases,
upsampling or downsampling. The cutoff is designed to remove images in the
case of upsampling and to act as an anti-alias filter in the case of
downsampling.

Kadhiem
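(The 3/10-sample fractional offset trick can be sketched like this in numpy/scipy; this is my own construction with arbitrary test numbers, not a polished fractional-delay filter:)

import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(1024) / fs
x = np.sin(2 * np.pi * 50 * t)          # a 50 Hz test tone

# Interpolate by 10 (zero-stuff + image filter), then keep every 10th sample
# starting from the 4th: the output lies on a grid offset by 3/10 of the
# original sample period.
y10 = signal.resample_poly(x, up=10, down=1)
y = y10[3::10]

# Compare against the directly computed shifted tone, away from the edges.
ref = np.sin(2 * np.pi * 50 * (t[:len(y)] + 0.3 / fs))
print(np.max(np.abs(y[50:-50] - ref[50:-50])))   # small; limited by the filter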