DSPRelated.com
Forums

basic question on sinc interpolation for sampled sequence

Started by Nasser M. Abbasi February 14, 2010
Hello;

I wrote a small program to implement the sinc interpolation formula as shown 
in many places such as here

http://en.wikipedia.org/wiki/Whittaker%E2%80%93Shannon_interpolation_formula

This is just to help me learn things.

It seems to work OK, but I am still not sure about one thing, so I thought 
I'd ask here. I'll give an example, and make it very simple.

I use x(t)=cos(t) as my original signal to test with; let's say its 
frequency is F (Hz).

I sample it with increasing Fs, all the way up past 2F. I can see that when 
I get to Fs=2F, the sinc interpolation does recover the original signal x(t) 
exactly, but once I go over that the match gets a little worse again, then 
it gets back to an exact match as Fs goes higher, to say around 4F. Maybe 
this is just numerical stuff due to where the last sample ends each time.

Any way, back to my question.

Suppose I obtain say n samples from x(t).  The first sample is assumed to be 
at t=0, i.e. sample 1 is x[1]=cos(0), the second sample is x[2]=cos(T), the 
3rd sample is x[3]=cos(2T), and the nth sample is x[n]=cos((n-1)T).

Now, the sum shown in the formula runs from -infinity to +infinity. Of 
course I only have n samples.

So, since this formula assumed the signal being sampled is periodic (that 
is, after all, how this formula came about), my question is: can I do the 
summation from -nT all the way to (n-1)*T?

i.e. I assumed the sequence x[n] is periodic (since it samples a periodic 
signal x(t) with Fs>2F), and then I duplicated it. I took x[n] and copied it 
around the origin so as to make the summation happy. i.e. I now have, for 
say n=3, the following

   x[1] x[2] x[3]  x[1] x[2] x[3]
                        ^
                        |
                       t=0 here


The other option is just to do the sum from n=1 to the number of samples. I 
get very slightly different results in the two cases. In one case the sinc 
function will be shifted to the right side only instead of both sides.
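For what it's worth, the one-sided option can be sketched in a few lines. 
This is Python/NumPy rather than the Mathematica mentioned later in the 
thread, and the signal, rates, and record length are arbitrary demo choices:

```python
import numpy as np

def sinc_interp(x, T, t):
    # Whittaker-Shannon reconstruction truncated to the available samples:
    # xr(t) = sum_k x[k] * sinc((t - k*T)/T),  k = 0 .. n-1  (one-sided sum)
    k = np.arange(len(x))
    return np.array([np.sum(x * np.sinc((ti - k * T) / T)) for ti in t])

F = 1.0                          # test-signal frequency in Hz (arbitrary)
Fs = 4.0 * F                     # sampling rate, comfortably above 2F
T = 1.0 / Fs
n = 64
x = np.cos(2 * np.pi * F * np.arange(n) * T)   # x[k] = cos(2*pi*F*k*T)

# evaluate well inside the record, where the missing sinc tails hurt least
t = np.linspace(10 * T, (n - 11) * T, 200)
xr = sinc_interp(x, T, t)
err = np.max(np.abs(xr - np.cos(2 * np.pi * F * t)))
```

Well inside the record the truncated sum already matches closely; the small 
residual comes from the sinc tails belonging to samples outside the record, 
which is also why duplicating the sequence around the origin gives slightly 
different numbers.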

ps. I tried to also implement this using direct convolution: I sampled x(t) 
and obtained x[n], then I sampled a sinc function for say M zero crossings 
on each side, using a large sampling frequency (larger than the one used to 
sample the sequence x(t) itself). I ended up with 2 sequences, and then used 
the linear convolution function I have. But I need to figure out what to do 
with the result and how to map it to the correct time frame, so I am back to 
using the sinc interpolation for now.

So, my question is: do the sum from 1 to n only, or duplicate the sequence 
before applying the sinc interpolation?

Thanks,
--Nasser


On 14 Feb, 07:48, "Nasser M. Abbasi" <n...@12000.org> wrote:

> So, since this formula assumed the signal being sampled is periodic,
> (that is after all how this formula came about),

Could you justify both those claims, please? That the signal is periodic, 
and that this property is somehow necessary or essential for the analysis?

Rune
"Rune Allnor" <allnor@tele.ntnu.no> wrote in message 
news:a117e885-bc9a-4bdc-91a9-6efdf4a8a813@j31g2000yqa.googlegroups.com...
> On 14 Feb, 07:48, "Nasser M. Abbasi" <n...@12000.org> wrote:
>
>> So, since this formula assumed the signal being sampled is periodic,
>> (that is after all how this formula came about),
>
> Could you justify both those claims, please?
> That the signal is periodic and that this property
> is somehow necessary or essential for the analysis?
>
> Rune
hi Rune,

Maybe what I said was not too clear. What I meant is this:

The derivation of the sinc interpolation formula starts by taking the 
Fourier transform X(w) of x(t), then it finds the relation of X(w) to the 
discrete Fourier transform of x[n], which is obtained from x(t) by sampling. 
Right? This is how it is derived in my textbook (DSP by Oppenheim, 1975, 
page 29).

But when we are given some x(t) of limited time frame, say t=0 to t=t0, and 
we want the Fourier transform of x(t), then we assume this signal is 
periodic with period 0..t0 and then let the period go to infinity to obtain 
the Fourier transform of x(t) from the Fourier coefficients. Right? This is 
how my textbook finds the Fourier transform of a signal that is defined over 
some limited time duration.

I mean, x(t) might not be periodic of course, i.e. it can be aperiodic, but 
we have a limited time view of it. So what I meant to say is that x(t) is 
*assumed* to be periodic over the length of the time record. Again, this is 
just to be able to talk about the Fourier transform for it. Maybe you do not 
think this is the correct way to look at it.

But my main question is on the implementation of the sum of the sinc 
formula. Should I sum one side only, from k=0 on, or do I need to do both 
sides somehow?

btw, here is one mention of this "periodic" bit on the net:

http://www.springerlink.com/content/f034753237h18g77/

"Abstract: Sinc-interpolation is a very efficient infinitely differentiable 
approximation scheme from equidistant data on the infinite line. It, 
however, requires that the interpolated function decreases rapidly or is 
periodic. We give an error formula for the case where neither of these 
conditions is satisfied."

Thanks,
--Nasser
On 14 Feb, 09:57, "Nasser M. Abbasi" <n...@12000.org> wrote:
> "Rune Allnor" <all...@tele.ntnu.no> wrote in message > > news:a117e885-bc9a-4bdc-91a9-6efdf4a8a813@j31g2000yqa.googlegroups.com... > > > On 14 Feb, 07:48, "Nasser M. Abbasi" <n...@12000.org> wrote: > > >> So, since this formula assumed the signal being sampled is periodic, > >> (that > >> is after all how this formula same about), > > > Could you justify both those claims, please? > > That the signal is periodic and that this property > > is somehow necessary or essential for the analysis? > > > Rune > > hi Rune, > > May be what I said was not too clear. What I meant is this: > > The derivation of the sinc interpolation formula starts by taking the > Fourier transform X(w) of x(t), then it finds the relation of X(w) to the > discrete Fourier transform of x[n], which is obtained from x(t) by sampling. > right? This is how it is derived in my textbook (DSP by oppenheim, 1975, > page 29).
Sure.
> But when we are given some x(t) of limited time frame, say t=0 to t=t0,
> and we want the Fourier transform of x(t), then we assume this signal is
> periodic of period 0..t0 and then let the period go to infinity to obtain
> the Fourier transform of x(t) from the Fourier coefficients. Right?
Wrong.

What one needs to understand is that there are several variations of the 
Fourier transform around, as well as some more "Fourier-like" transforms. 
The four basic forms of the FT are classified in terms of whether

1) The signal domain is continuous-time (CT) or discrete-time (DT)
2) The signal support is of finite (F) or infinite (I) extent.

From these classes alone, you have four transforms:

1) CT-I - Fourier transform
2) CT-F - Fourier series
3) DT-I - Discrete-Time Fourier Transform, DTFT
4) DT-F - Discrete Fourier Transform, DFT.

For all these variants the requirement for the FT to exist is that the sum 
or integral of the squared time signal over the signal support is finite 
(i.e. finite-energy signals):

   sum |x[n]|^2 < infinity

for the DT variants and

   integral |x(t)|^2 dt < infinity

for the CT variants.

One can also modify the FT to allow for finite *power* signals, (view with 
fixed-width font)

             1      N
    lim    ----    sum   |x[n]|^2 < inf
   N->inf  2N-1   n=-N

(and a similar relation for the CT-I variant), i.e. the infinite-duration 
signals have finite power.

The problem is that these different variants were not recognized at first, 
so some early authors, who wanted to compute the DT-I coefficients by means 
of the FFT, which computes DT-F coefficients, started messing around with 
sloppy remarks about the DFT being governed by assumptions about signal 
periodicity.

Since this blunder somehow slipped through peer review - no one seems to 
have been aware of the relevance and importance of the details at the time - 
and got published in one of the leading journals, the myths and 
superstitions about periodic properties of data have remained to this day.
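As a side note, that finite-power condition is easy to check numerically. A 
small Python sketch (the digital frequency w0 is an arbitrary choice for the 
demo; the sum is normalized here by the sample count, 2N+1):

```python
import numpy as np

# x[n] = cos(w0*n) has infinite energy, but finite average power:
# (1/(2N+1)) * sum_{n=-N..N} |x[n]|^2  ->  1/2  as N -> inf
w0 = 0.3                         # arbitrary digital frequency for the demo
powers = []
for N in (100, 1000, 10000):
    n = np.arange(-N, N + 1)
    powers.append(np.sum(np.cos(w0 * n) ** 2) / (2 * N + 1))

# the energy sum itself just keeps growing with N, so the plain FT
# existence condition fails while the power condition is satisfied
energies = [np.sum(np.cos(w0 * np.arange(-N, N + 1)) ** 2)
            for N in (100, 1000)]
```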
> This is how my textbook finds the Fourier transform of a signal that is
> defined over some limited time duration.
Too bad for you. But unfortunately, there aren't too many textbooks around 
that do these things properly.
> I mean x(t) might not be periodic of course, i.e. it can be aperiodic,
> but we have a limited time view of it.
Sure. Which means we do not know anything about the properties of the 
function outside the time gate.

Now, a complicating factor is that the finite-domain *basis* *functions* are 
periodic, so for the purposes of arithmetic, it makes a lot of sense to view 
the signal as periodic outside the observation interval.

This is, however, a consequence of the choice of basis functions and *not* a 
property of the signal as such.
> So, what I meant to say is that x(t) is *assumed* to be periodic over the
> length of the time record. Again, this is just to be able to talk about
> the Fourier transform for it. Maybe you do not think this is the correct
> way to look at it.
It depends on the context. Circular convolution is a fact. Unless you take 
certain precautions, the result of a convolution implemented in the spectrum 
domain as multiplication of DFTs of signals *behaves* *as* *if* the signals 
were periodic. This does *not* mean that the signals actually *are* 
periodic; it is merely a computational artefact caused by the periodic 
properties of the exponential basis functions.

Rune
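The "behaves as if periodic" point about spectrum-domain convolution can be 
demonstrated directly. A toy Python sketch (the two 4-point sequences are 
arbitrary):

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])
h = np.array([1.0, 1.0, 0.0, 0.0])
N = len(x)

# convolution "implemented in spectrum domain as multiplication of DFTs"
via_dft = np.real(np.fft.ifft(np.fft.fft(x) * np.fft.fft(h)))

# direct circular convolution: indices wrap modulo N, i.e. the sequences
# are treated *as if* they were periodic with period N
circular = np.array([sum(x[m] * h[(k - m) % N] for m in range(N))
                     for k in range(N)])

linear = np.convolve(x, h)   # ordinary linear convolution, no wrap-around
```

The first output sample differs between the two routes: the DFT route wraps 
x[3] back onto x[0], which is exactly the artefact described above. 
Zero-padding both sequences to length at least 2N-1 before taking the DFTs 
is the usual precaution that recovers the linear result.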
Rune Allnor wrote:
<SNIP>
Rune pretty much tells it like it is. The one thing I struggle with is to 
hear that the "periodic assumption" is somehow unfortunate. I've heard it 
said here on occasion but don't really understand. And I've said the 
opposite many times and nobody has been confrontational enough to take me on 
- or, if anyone did, I don't remember :-)

Rune's comment about periodic basis functions likely helps a lot.

My own view has always been this: if one has a finite sequence of samples, 
then *that sequence* necessarily represents a function whose transform (or 
inverse transform) is periodic... and vice versa. The transform can be 
continuous or discrete (although the former becomes the latter anyway in 
this case) and finite or infinite. Otherwise I think the discussion of 
circular convolutions, etc. wouldn't apply... but it does.

I emphasize *that sequence* because, presumably, that's all we have. I guess 
one could say that the sequence could represent an infinite number of 
possible sequences taken *outside* the interval - which I guess is what's 
meant by "the signal" in some contexts. That possibility doesn't seem useful 
to me. What's to be done with or about it?

If we were living in an abstract world then we could talk about signals of 
infinite extent, etc. with all sorts of interesting properties. But when 
we're dealing with real stuff then it's all moot. The only thing that 
matters are the properties of the real stuff. That doesn't mean there still 
can't be rules, theorems, laws, etc.

Having the dreaded periodicity in the mind's eye is an invaluable tool as 
far as I'm concerned. So, what am I missing here?

Fred
On Feb 14, 10:24 am, Fred Marshall <fmarshallx@remove_the_xacm.org>
wrote:

> ...
> If we were living in an abstract world then we could talk about signals
> of infinite extent, etc. with all sorts of interesting properties, etc.
> But when we're dealing with real stuff then it's all moot. The only
> thing that matters are the properties of the real stuff. That doesn't
> mean there still can't be rules, theorems, laws, etc.
>
> Having the dreaded periodicity in the mind's eye is an invaluable tool
> as far as I'm concerned. So, what am I missing here?
>
> Fred
Fred

As long as you can recognize which properties belong to 'signals of infinite 
extent, etc. with all sorts of interesting properties, etc.' and that when 
you are working with real samples 'the only thing that matters are the 
properties of the real stuff', then you aren't missing anything here.

We get a lot of questions from people who expect the properties of signals 
of infinite/continuous extent to apply to finite discrete samples, asking 
how to realize their expectations. What is broken here is their expectation, 
as well perhaps as their code. But fixing the code won't satisfy the 
expectations. The expectations need to be fixed, too.

The most frequent keyword used in justification of applying 
infinite/continuous expectations to discrete finite real sampled data seems 
to be 'periodic assumption'. I think that this is an 'unfortunate' abuse of 
a powerful concept whose value we agree on.

Dale B. Dalrymple
"Nasser M. Abbasi" <nma@12000.org> wrote in message
<SNIP>

Thanks for the input Rune and everyone else.

I did the sinc interpolation sum from k=0 to (number of samples - 1), i.e. 
one-sided. I did not make any assumptions of periodicity. I know there are 
more advanced ways to do this using windowing and such, but this 
implementation is just a basic one. I sampled the sinc function at a higher 
rate than the one used to sample the original signal, in order to get a 
smoother plot to show.

I have a "beta" version of this small program. This implements the sinc 
interpolation formula as one changes the sampling frequency.  One can select 
few predefined signals to sample. The error between the reconstructed signal 
and the original signal is also plotted. One can see that the error becomes 
less as sampling frequency increases. Due to the limited number of samples, 
there will always be a small error even at Fs>Nyquest.
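That error-versus-Fs behaviour is easy to reproduce independently. A rough 
Python sketch, not the Mathematica program itself (the test signal, record 
length, and rates are arbitrary demo choices):

```python
import numpy as np

F = 1.0                              # test-signal frequency, Hz
n = 200                              # number of samples per run
rates = [2.2 * F, 4.0 * F, 8.0 * F]  # sampling rates above Nyquist (2F)
errs = []
for Fs in rates:
    T = 1.0 / Fs
    x = np.cos(2 * np.pi * F * np.arange(n) * T)
    t = np.linspace(50 * T, 150 * T, 300)    # interior of the record
    k = np.arange(n)
    # truncated Whittaker-Shannon sum over the n available samples
    xr = np.array([np.sum(x * np.sinc((ti - k * T) / T)) for ti in t])
    errs.append(np.max(np.abs(xr - np.cos(2 * np.pi * F * t))))
```

The maximum error shrinks as Fs moves further above 2F, but stays nonzero 
for any finite record, consistent with the behaviour described above.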

If someone wants to try the program and let me know if they spot any 
"issues", that would be nice, and I will try to correct any problems.

You can email me or post here, as you like. This is a free code, and meant 
to be for educational purposes (I learn better by doing, so I like to 
implement something to learn it better when possible).

There is also a gif animation on the page below if you do not want to 
download the program to your computer.

This was written in Mathematica, but runs using the free Mathematica Player 
(from the WRI web site; just google "free Mathematica player"). No need to 
have Mathematica on your computer to run these programs.

It is the first program on my page below:

http://12000.org/my_notes/mma_demos/index.htm

Thanks,

--Nasser 


On Feb 14, 5:39 am, Rune Allnor <all...@tele.ntnu.no> wrote:
> On 14 Feb, 09:57, "Nasser M. Abbasi" <n...@12000.org> wrote:
<SNIP>
> One can also modify the FT to allow for finite *power* signals,
> (view with fixed-width font)
>
>              1      N
>     lim    ----    sum   |x[n]|^2 < inf
>    N->inf  2N-1   n=-N
i think you meant 2N+1 in the denom, but the condition is sufficient nonetheless.
> (and a similar relation for the CT-I variant), i.e. the infinite-
> duration signals have finite power.
>
> The problem is that these different variants were not recognized at
> first, so some early authors, who wanted to compute the DT-I
> coefficients by means of the FFT, which computes DT-F coefficients,
> started messing around with sloppy remarks about the DFT being governed
> by assumptions about signal periodicity.
now, i dunno what Nasser meant about his assumption of signal periodicity, 
but even with Fred's and Dale's support, i (as well as early authors like 
O&S) continue to recognize the utter non-difference in the equations 
regarding the Discrete Fourier Series (DFS) and the Discrete Fourier 
Transform (DFT).

the DFT maps a discrete and periodic sequence in one domain (call it the 
"time domain") to another discrete and periodic sequence in the reciprocal 
domain (we'll call it the "frequency domain") having the same integer 
period. that's what the DFT is, whether or not any human beings make any 
assumptions about periodicity of the signal they are considering. we're not 
assuming this. we're recognizing it.
> Since this blunder somehow slipped through peer reviews
it's no blunder. in fact, it is *missing* this fact and assuming otherwise (that perhaps the DFT assumes something else, like zero extension) that is the blunder.
> - no one seems to have been aware of the relevance and importance of the
> details at the time - and got published in one of the leading journals,
> the myths and superstitions about periodic properties of data have
> remained to this day.
we know from the continuous Fourier Transform that periodically extending a function in one domain (by summing together shifted copies) causes sampling of the transform of that function in the other domain. likewise sampling (with impulses) in one domain causes periodic extension in the other. something that is both sampled (made discrete) *and* periodic in one domain will have its transform also discrete and periodic in the reciprocal domain.
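That sampling/periodic-extension duality can be checked directly: sampling 
the DTFT of a finite sequence at N equally spaced frequencies reproduces its 
N-point DFT, and the DFT sum itself is periodic in k. A Python sketch (the 
toy sequence is arbitrary):

```python
import numpy as np

x = np.array([1.0, -2.0, 0.5, 3.0])   # arbitrary finite sequence
N = len(x)
n = np.arange(N)

# DTFT of x (taken as zero outside 0..N-1), sampled at w = 2*pi*k/N
dtft_samples = np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N))
                         for k in range(N)])

X = np.fft.fft(x)                     # the N-point DFT

# evaluating the same sum at k+N gives the same values: the spectrum the
# DFT produces is itself discrete and periodic with period N
X_shift = np.array([np.sum(x * np.exp(-2j * np.pi * (k + N) * n / N))
                    for k in range(N)])
```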
> > This is how my textbook finds the Fourier transform > > of a signal that is defined over some limited time duration.
listen, when you simply look at a finite snippet of signal (which is the 
same as a signal of possibly infinite extent being multiplied by a window 
function), you have no idea what it was outside of that finite domain. but 
as soon as you pass it to the DFT, you've periodically extended it; that is 
how the DFT is going to deal with it, as if you've passed to it exactly one 
period of a discrete and periodic signal. whether it was periodic or not 
before applying the windowing operation does not matter; the DFT (and its 
associated operations and theorems) will "assume" so and will treat it as 
such.
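One concrete way to see the DFT treating its input as one period: the DFT 
shift theorem is circular. A small Python sketch (toy sequence, arbitrary): 
applying a one-sample delay as a phase ramp on the DFT wraps the last sample 
around to the front, rather than shifting in a zero.

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0, 4.0])   # a finite snippet passed to the DFT
N = len(x)
k = np.arange(N)

# a one-sample delay applied in the DFT domain: X[k] * exp(-j*2*pi*k/N)
X = np.fft.fft(x)
delayed = np.real(np.fft.ifft(X * np.exp(-2j * np.pi * k / N)))

# the sample that "fell off" the end reappears at the front: the DFT has
# treated x as one period of a periodic sequence, not as zero-extended
```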
> > I mean x(t) might not be periodic of course, i.e. it can be aperiodic,
> > but we have a limited time view of it.
>
> Sure. Which means we do not know anything about the properties of the
> function outside the time gate.
>
> Now, a complicating factor is that the finite-domain *basis* *functions*
> are periodic, so for the purposes of arithmetic, it makes a lot of sense
> to view the signal as periodic outside the observation interval.
>
> This is, however, a consequence of the choice of basis functions and
> *not* a property of the signal as such.
i agree with that Rune. but when you pass it to the DFT, the DFT uses periodic basis functions and views that finite snippet of data as periodic to fit to it.
> > So, what I meant to say is that x(t) is *assumed* to be periodic over
> > the length of the time record. Again, this is just to be able to talk
> > about the Fourier transform for it.
the *Discrete* Fourier transform for it.
> > Maybe you do not think this is the correct way to look at it.
>
> It depends on the context. Circular convolution is a fact. Unless you
> take certain precautions, the result of a convolution implemented in
> spectrum domain as multiplications of DFTs of signals *behaves* *as*
> *if* the signals were periodic. This does *not* mean that the signals
> actually *are* periodic,
it does mean that, as far as the DFT (and its properties) are concerned, the 
signal *it* is looking at is one period of a periodic sequence.
> but is merely a computational artifact caused by the periodic
> properties of the [sinusoidal] basis functions.
Nasser, i have been involved in "periodic" fights here at comp.dsp about this inherent property of the DFT that periodically extends the finite set of data passed to it (because the basis functions of the DFT are all periodic sharing a common period). r b-j
On Feb 14, 12:57 am, "Nasser M. Abbasi" <n...@12000.org> wrote:

> ...
> The derivation of the sinc interpolation formula starts by taking the
> Fourier transform X(w) of x(t), then it finds the relation of X(w) to the
> discrete Fourier transform of x[n], which is obtained from x(t) by
> sampling. right? This is how it is derived in my textbook (DSP by
> Oppenheim, 1975, page 29).
> ...
> --Nasser
I would suggest that you consider, in the paragraph between equations 1.132 
and 1.133, the warning about band limiting and time aliasing. They may be 
part of your results.

Chapter 3 deals with the issues of periodic sequences. Look there again for 
some of your answers. Be careful there, because there is a change in 
notation: x and X with tilde for infinite periodic sequences, and x and X 
alone for finite periodic sequences. In Chapter 3, Oppenheim and Schafer 
1975 never fail to make the explicit distinction between finite periodic and 
infinite periodic sequences. Not all readers succeed in making that leap.

Also, if you look through the archives of comp.dsp, be aware that there is 
another book by Oppenheim and Schafer, DTSP 1989. Not all references to 
"O&S" have clearly distinguished between the two. For example, O&S 1975 
Chapter 3 content moved to O&S 1989 Chapter 8. Thanks for making your 
reference clear.

Dale B. Dalrymple