
representing discrete periodic signal using sinusoidal signals

Started by SBR123 July 21, 2014
Hello All,

I am trying to understand the concept of representing a discrete periodic
signal using sinusoidal signals.

(Without going into detailed math:) if my understanding is correct, a
discrete periodic signal can be represented using a collection of
sinusoidal signals of different frequencies. The sinusoidal signals would
be amplitude scaled and would have different phases.

Is my understanding broadly correct, or is there any other factor involved
in the representation using these sinusoidal signals?

Further,
a) would the scaling factors be constant within a given signal?
b) would the scaling factors be constant across sinusoidal signals?

I was trying to understand this better by going through the DFT equation of a
periodic signal. Basically, what I observe is that sinusoidal signals
are being multiplied by x(n) (where x(n) is a sample from the discrete periodic
signal). If this interpretation is correct, then the sinusoidal signals are
being scaled using x(n) terms whose values are not constant ...

I would be glad if experts here can weigh in ...

PS: I am not a DSP expert. My area of work is VLSI and DSP is an
application area for me ...

_____________________________		
Posted through www.DSPRelated.com
On Mon, 21 Jul 2014 10:08:28 -0500, SBR123 wrote:

> Hello All,
>
> I am trying to understand the concept of representing a discrete
> periodic signal using sinusoidal signals.
>
> (Without going into detailed math:) if my understanding is correct, a
> discrete periodic signal can be represented using a collection of
> sinusoidal signals of different frequencies. The sinusoidal signals
> would be amplitude scaled and would have different phases.
>
> Is my understanding broadly correct, or is there any other factor
> involved in the representation using these sinusoidal signals?
Your understanding is broadly correct, and with just a bit of work making
the statements more precise, it can be made to be always exactly correct.
> Further,
> a) would the scaling factors be constant within a given signal?
> b) would the scaling factors be constant across sinusoidal signals?
>
> I was trying to understand this better by going through the DFT equation
> of a periodic signal. Basically, what I observe is that sinusoidal
> signals are being multiplied by x(n) (where x(n) is a sample from the
> discrete periodic signal). If this interpretation is correct, then the
> sinusoidal signals are being scaled using x(n) terms whose values are
> not constant ...
>
> I would be glad if experts here can weigh in ...
I'm not sure what you're trying to say in this section, so I'm going to
answer what I think your question is, using terminology that I'm more
comfortable with.

I think you may be conflating two questions, one being "how can I
represent a periodic signal with sinusoids?" and the other being "how do
I arrive at such a representation?" I'm going to split these up.

How you can represent a periodic signal with sinusoids is easy:

Representing a signal that repeats after N samples is the same as
representing a vector that's N samples long. If you have N prototype
vectors that are N samples long and that are linearly independent, then
you can always find a way to weight these N vectors so that their
weighted sum equals your original vector. That's the general method.

If you decide (for even N -- please just ignore odd N for now) that you
want to choose your N prototype vectors to be

  cos(0 * n * 2*pi/N), cos(1 * n * 2*pi/N), sin(1 * n * 2*pi/N),
  cos(2 * n * 2*pi/N), sin(2 * n * 2*pi/N), ..., cos(n * pi)

then it's pretty easy to show that each of these vectors is orthogonal to
all the others, which property guarantees that the set is linearly
independent. So if you can just find weighting factors, you're home free.

I'm not going to go into the gory details of the math, but the DFT is
just a handy way of finding those weighting factors. The set of weighting
factors is always the same for any given signal, but is necessarily
different for a different signal.

I hope this helps.

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
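A minimal numpy sketch of the procedure Tim describes (not from the
original thread; the signal values and N are made up for illustration):
build the N mutually orthogonal prototype vectors for an even N, get each
weight by projecting x onto the corresponding vector, and confirm that the
weighted sum reproduces the original period.

```python
import numpy as np

N = 8                                  # even period length, as in Tim's description
n = np.arange(N)
rng = np.random.default_rng(0)
x = rng.standard_normal(N)             # one period of an arbitrary real signal

# N prototype (basis) vectors: DC, cos/sin pairs, and the Nyquist cosine.
basis = [np.ones(N)]                   # cos(0 * n * 2*pi/N)
for k in range(1, N // 2):
    basis.append(np.cos(2 * np.pi * k * n / N))
    basis.append(np.sin(2 * np.pi * k * n / N))
basis.append(np.cos(np.pi * n))        # cos(n * pi), the k = N/2 term
B = np.array(basis)                    # shape (N, N), one vector per row

# Mutual orthogonality: all off-diagonal inner products are (numerically) zero.
gram = B @ B.T
assert np.allclose(gram, np.diag(np.diag(gram)))

# Because the set is orthogonal, each weight is just a projection of x
# onto the corresponding prototype vector -- no matrix inversion needed.
weights = (B @ x) / np.diag(gram)

# The weighted sum of the fixed prototype vectors reproduces the original period.
x_rebuilt = weights @ B
assert np.allclose(x_rebuilt, x)
```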
On 7/21/14 11:08 AM, SBR123 wrote:
> I am trying to understand the concept of representing a discrete periodic
> signal using sinusoidal signals.
that's what the Discrete Fourier Series is all about. the Discrete Fourier Transform is the same thing by a slightly different name. either way, the DFT maps a discrete and periodic sequence, with period length of N, x[n], to another discrete and periodic sequence with the same period length, X[k]. and the iDFT maps it back.
> (without going into detailed math),
what's the fun in that?
> If my understanding is correct, a discrete periodic signal can be
> represented using a collection of sinusoidal signals of different
> frequencies.
and these different frequencies are all integer multiples of a common "fundamental frequency", which is 2*pi/N.
> The sinusoidal signals would be amplitude scaled and would have
> different phases.
and together the amplitude and phase are combined into a single complex
scalar, X[k]. in the olden days we called them "phasors". in the DFS we
just call them "coefficients" and in the DFT (which is the same thing by
another name) we call them the same thing.
> Is my understanding broadly correct or is there any other factor involved
> in representation of these sinusoidal signals?
>
> Further,
> a) would the scaling factors be constant within a given signal?
sure, they're coefficients.
> b) would the scaling factors be constant across sinusoidal signals?
have no idea what you mean by that.
> I was trying to understand better by going through DFT equation of a
> periodic signal. Basically, what I am observe is that sinusoidal signals
> are being multiplied by x[n] (where x[n] is a sample from discrete
> periodic signal). If this interpretation is correct then sinusoidal
> signals are being scaled using x[n] terms whose values are not constant ...
i really don't know what you mean.
> I would be glad if experts here can weigh in ...
okay, here is what it is most concisely:

you have a discrete and periodic signal, x[n]. what makes it discrete is
that x[n] does not have meaning unless the argument, n, is an integer.
what makes it periodic is this: x[n+N] = x[n] for all integers n. so the
period length is N.

now, Fourier said we can do this:

                  +inf
  x[n] = A[0] +   SUM { A[k] cos(2*pi*k*n/N) + B[k] sin(2*pi*k*n/N) }
                  k=1

and what Shannon and Nyquist say is this:

                  N/2
  x[n] = A[0] +   SUM { A[k] cos(2*pi*k*n/N) + B[k] sin(2*pi*k*n/N) }
                  k=1

(it turns out that B[N/2] can be anything because it always multiplies 0.
i'm not dealing with this rigorously.)

and Euler says we can turn each sinusoid into complex exponentials:

          N-1
  x[n] =  SUM { X[k] exp(j*2*pi*k*n/N) }
          k=0

where the X[k] are related to the A[k] and B[k] in some manner. X[0]
equals A[0].

now X[k] is *constant* with respect to n. it is a coefficient that
multiplies (or "scales") the time-varying term exp(j*2*pi*k*n/N), which i
am calling a "sinusoid". and it's complex, so |X[k]| scales the size of
the sinusoid and the angle of X[k] determines the phase of the sinusoid.

now, all the DFT does is, assuming you know x[n] for every 0 <= n < N (or
any other complete period), then you can determine the X[k] values pretty
readily. turns out that

                N-1
  X[k] = 1/N *  SUM { x[n] exp(-j*2*pi*k*n/N) }
                n=0

that's not hard to derive.

also, you might notice that the 1/N factor usually goes onto the first
summation and is left out of the latter summation. that's a convention,
not a mathematical necessity. but it *is* a mathematical necessity that
it goes *somewhere*. we *could*, if we wanted to, split it up and put
sqrt(1/N) on both the DFT and iDFT summation, and some folks have done
that.
> PS: I am not a DSP expert. My area of work is VLSI and DSP is an
> application area for me ...
don't sweat it.

--
r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
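The two summations in the post above translate almost line for line into
a short numpy sketch (my own illustration, with arbitrary sample values;
the 1/N is kept on the analysis sum exactly as written there):

```python
import numpy as np

N = 6
n = np.arange(N)
x = np.array([1.0, 3.0, -2.0, 0.5, 4.0, -1.0])   # one period of x[n]; any values will do

k = n.reshape(-1, 1)                              # frequency index, 0 .. N-1

# analysis:  X[k] = 1/N * SUM_n x[n] exp(-j*2*pi*k*n/N)
X = (x * np.exp(-2j * np.pi * k * n / N)).sum(axis=1) / N

# synthesis: x[n] = SUM_k X[k] exp(+j*2*pi*k*n/N)
x_rebuilt = (X.reshape(-1, 1) * np.exp(2j * np.pi * k * n / N)).sum(axis=0)

assert np.allclose(x_rebuilt, x)                  # round trip recovers the period exactly

# each coefficient is one constant complex number: |X[k]| is the amplitude
# and angle(X[k]) is the phase of the k-th sinusoid, independent of n.
print(np.abs(X))
print(np.angle(X))
```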
Hello Mr. Wescott,

Thanks a lot. Your reply clarifies a lot of the questions I have been having.

I have some additional questions based on your reply, which I have added
alongside your response.

Thanks a lot.

> How you can represent a periodic signal with sinusoids is easy:
>
> Representing a signal that repeats after N samples is the same as
> representing a vector that's N samples long. If you have N prototype
> vectors that are N samples long and that are linearly independent, then
> you can always find a way to weight these N vectors so that their
> weighted sum equals your original vector.
[SBR123] Though I don't fully appreciate the need for linearly independent
signals, based on intuition it probably means that each of these
prototypes cannot interfere with the other signals and can be modified
without taking the other prototype signals into account. I hope my
understanding is ok.
> If you decide (for even N -- please just ignore odd N for now) that you
> want to choose your N prototype vectors to be
>
>   cos(0 * n * 2*pi/N), cos(1 * n * 2*pi/N), sin(1 * n * 2*pi/N),
>   cos(2 * n * 2*pi/N), sin(2 * n * 2*pi/N), ..., cos(n * pi)
>
> then it's pretty easy to show that each of these vectors is orthogonal to
> all the others, which property guarantees that the set is linearly
> independent. So if you can just find weighting factors, you're home free.
> I'm not going to go into the gory details of the math, but the DFT is
> just a handy way of finding those weighting factors. The set of
> weighting factors is always the same for any given signal, but is
> necessarily different for a different signal.
[SBR123] Sir, below are a few key questions that I was trying to ask in
the original thread (sorry, it did not come out clearly in my posting) ...

a) the weighting factors are solely derived from the discrete signal x[n]
samples. Is this correct?

b) if a) is correct, then I would assume that after the prototype signals
are multiplied with the weighting factors, the shape of the prototype
signal can change. I say this because we have no control over what the
x[n] values are going to be and hence they can modify the shape of the
prototype signal in an unpredictable manner. Is this correct?

c) how is the phase value of each prototype signal determined? Is it that
the shape of the prototype signal does not change, but rather its phase
(and of course, amplitude) due to the weighting factors?

_____________________________
Posted through www.DSPRelated.com
On Tue, 22 Jul 2014 03:18:10 -0500, SBR123 wrote:

> Hello Mr. Wescott,
>
> Thanks a lot. Your reply does clarify a lot of questions I am having.
>
> I have some additional questions based on your reply, that I have added
> alongside your response.
>
> Thanks a lot.
>
>> How you can represent a periodic signal with sinusoids is easy:
>>
>> Representing a signal that repeats after N samples is the same as
>> representing a vector that's N samples long. If you have N prototype
>> vectors that are N samples long and that are linearly independent, then
>> you can always find a way to weight these N vectors so that their
>> weighted sum equals your original vector.
>
> [SBR123] Though I don't fully appreciate the need for linearly independent
> signals, based on intuition it probably means that each of these
> prototypes cannot interfere with the other signals and can be modified
> without taking the other prototype signals into account. I hope my
> understanding is ok.
Linear independence just guarantees that you can find a set of weights to
represent the original signal, no matter what the original signal may be.

Your intuitive meaning is really applicable to orthogonal signals. That's
OK, though, because that's what the Fourier Transform works with.
>> If you decide (for even N -- please just ignore odd N for now) that you
>> want to choose your N prototype vectors to be
>>
>>   cos(0 * n * 2*pi/N), cos(1 * n * 2*pi/N), sin(1 * n * 2*pi/N),
>>   cos(2 * n * 2*pi/N), sin(2 * n * 2*pi/N), ..., cos(n * pi)
>>
>> then it's pretty easy to show that each of these vectors is orthogonal
>> to all the others, which property guarantees that the set is linearly
>> independent. So if you can just find weighting factors, you're home
>> free.
>>
>> I'm not going to go into the gory details of the math, but the DFT is
>> just a handy way of finding those weighting factors. The set of
>> weighting factors is always the same for any given signal, but is
>> necessarily different for a different signal.
>
> [SBR123] Sir, below are a few key questions that I was trying to ask in
> the original thread (sorry, it did not come out clearly in my posting) ...
>
> a) the weighting factors are solely derived from the discrete signal x[n]
> samples. Is this correct?
See below.
> b) if a) is correct, then I would assume that after the prototype signals
> are multiplied with the weighting factors, the shape of the prototype
> signal can change. I say this because we have no control over what the
> x[n] values are going to be and hence they can modify the shape of the
> prototype signal in an unpredictable manner. Is this correct?
I think we're confusing each other here.

In my discussion, when I say "prototype signal", I mean a signal that is
now and forevermore just one thing. (The term that's consistent with the
Fourier Transform is "basis signal".)

Where the confusion arises is that you are taking x[n] as a variable,
while I'm taking x[n] as being fixed. It may not be known, but whatever
it is just is -- for any given n, it's a constant. You measure a vector
x, and there it is. You have a truly periodic signal, and it is the
vector x repeated over and over again. It does not change. If it does
change, then you're using the wrong notation.

So, _for normal DSP terminology_, the answer to (a) is yes, the weighting
factors are determined solely from x. I can't answer (a) based on what
you're thinking, because I don't know what that is!
> c) how is the phase value of each prototype signal determined?
> Is it that the shape of the prototype signal does not change, but rather
> its phase (and of course, amplitude) due to the weighting factors?
The basis signals for the DFT are fixed, now and forevermore (or, at
least until you read someone else's text: but whoever they are, their
basis signals are fixed now and forevermore, too):

  y0[n]  = 1,
  x1i[n] = cos(2 * pi * n/N),
  x1q[n] = sin(2 * pi * n/N),
  etc.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
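As a small illustration of why a constant weight can never change the
shape of a basis signal (my own sketch, not part of Tim's post): weighting
the fixed cos/sin pair at one frequency by constants a and b only
rescales and phase-shifts a sinusoid at that same frequency.

```python
import numpy as np

N = 16
n = np.arange(N)
k = 3                                          # pick one basis-signal frequency

c = np.cos(2 * np.pi * k * n / N)              # fixed basis signal (never changes)
s = np.sin(2 * np.pi * k * n / N)              # fixed basis signal (never changes)

a, b = 2.0, -1.5                               # constant weights found from some x[n]

# The weighted combination is still a sinusoid at the *same* frequency k:
# a*cos(w*n) + b*sin(w*n) = A*cos(w*n - phi), with A and phi constants.
A   = np.hypot(a, b)                           # amplitude
phi = np.arctan2(b, a)                         # phase
combo = a * c + b * s
assert np.allclose(combo, A * np.cos(2 * np.pi * k * n / N - phi))
```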
Tim Wescott <tim@seemywebsite.really> writes:

> Linear independence just guarantees that you can find a set of weights
> to represent the original signal, no matter what the original signal may
> be.
Well, no. The term you're speaking of is "span." If a set of vectors
"span" a space, then any element in the space can be represented as a
linear combination of those vectors.

However, you can have a set of linearly independent vectors that don't
span the (signal) space. Example: (1, 0, 0) and (0, 1, 0) in R^3.

And if you did have a set of vectors that spanned the signal space, they
aren't necessarily linearly independent. Example: (0, 1), (1, 0), and
(0, 0) in R^2.

--
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com
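Randy's two counterexamples are easy to check numerically (an
illustrative sketch, not from the thread): the rank of the stacked
vectors, compared to the dimension of the space and to the number of
vectors, tells you whether they span and whether they are independent.

```python
import numpy as np

# Linearly independent, but do not span R^3:
A = np.array([[1, 0, 0],
              [0, 1, 0]])
print(np.linalg.matrix_rank(A))   # 2 == number of vectors -> independent
                                  # 2 <  3 == dim(R^3)     -> do not span

# Span R^2, but are not linearly independent:
B = np.array([[0, 1],
              [1, 0],
              [0, 0]])
print(np.linalg.matrix_rank(B))   # 2 == dim(R^2)              -> span
                                  # 2 <  3 == number of vectors -> dependent
```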
On Tue, 22 Jul 2014 23:21:47 -0400, Randy Yates wrote:

> Tim Wescott <tim@seemywebsite.really> writes:
>
>> Linear independence just guarantees that you can find a set of weights
>> to represent the original signal, no matter what the original signal
>> may be.
>
> Well, no. The term you're speaking of is "span." If a set of vectors
> "span" a space, then any element in the space can be represented as a
> linear combination of those vectors.
>
> However, you can have a set of linearly independent vectors that don't
> span the (signal) space. Example: (1, 0, 0) and (0, 1, 0) in R^3.
>
> And if you did have a set of vectors that spanned the signal space, they
> aren't necessarily linearly independent. Example: (0, 1), (1, 0), and
> (0, 0) in R^2.
In this case, in addition to getting into mathematics that the OP is
probably finding esoteric, the set of vectors has been specified to be N
vectors, each N elements long.

So spanning and linear independence are equivalent.

--
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
Tim Wescott <tim@seemywebsite.really> writes:

> On Tue, 22 Jul 2014 23:21:47 -0400, Randy Yates wrote:
>
>> Tim Wescott <tim@seemywebsite.really> writes:
>>
>>> Linear independence just guarantees that you can find a set of weights
>>> to represent the original signal, no matter what the original signal
>>> may be.
>>
>> Well, no. The term you're speaking of is "span." If a set of vectors
>> "span" a space, then any element in the space can be represented as a
>> linear combination of those vectors.
>>
>> However, you can have a set of linearly independent vectors that don't
>> span the (signal) space. Example: (1, 0, 0) and (0, 1, 0) in R^3.
>>
>> And if you did have a set of vectors that spanned the signal space, they
>> aren't necessarily linearly independent. Example: (0, 1), (1, 0), and
>> (0, 0) in R^2.
>
> In this case, in addition to getting into mathematics that the OP is
> probably finding esoteric, the set of vectors has been specified to be N
> vectors, each N elements long.
>
> So spanning and linear independence are equivalent.
Under the assumptions of this post it is true that

  linear independence <==> span the signal space,

but that does not mean that they are equivalent concepts.

An analogy from group theory is this: given two finite sets A and B with
the same number of elements, then for a mapping s,

  1:1 <==> onto.

However, 1:1 does not MEAN the same thing as onto, and I would not
explain 1:1 in terms of onto nor would I explain onto in terms of 1:1,
even under this assumption (finite sets).

Yeah, I think this was a bad way to explain linear independence to the
OP in spite of the assumed conditions.

--
Randy Yates
Digital Signal Labs
http://www.digitalsignallabs.com
On Wed, 23 Jul 2014 09:54:58 -0400, Randy Yates wrote:

> Tim Wescott <tim@seemywebsite.really> writes:
>
>> On Tue, 22 Jul 2014 23:21:47 -0400, Randy Yates wrote:
>>
>>> Tim Wescott <tim@seemywebsite.really> writes:
>>>
>>>> Linear independence just guarantees that you can find a set of
>>>> weights to represent the original signal, no matter what the original
>>>> signal may be.
>>>
>>> Well, no. The term you're speaking of is "span." If a set of vectors
>>> "span" a space, then any element in the space can be represented as a
>>> linear combination of those vectors.
>>>
>>> However, you can have a set of linearly independent vectors that don't
>>> span the (signal) space. Example: (1, 0, 0) and (0, 1, 0) in R^3.
>>>
>>> And if you did have a set of vectors that spanned the signal space,
>>> they aren't necessarily linearly independent. Example: (0, 1), (1, 0),
>>> and (0, 0) in R^2.
>>
>> In this case, in addition to getting into mathematics that the OP is
>> probably finding esoteric, the set of vectors has been specified to be
>> N vectors, each N elements long.
>>
>> So spanning and linear independence are equivalent.
>
> Under the assumptions of this post it is true that
>
>   linear independence <==> span the signal space,
>
> but that does not mean that they are equivalent concepts.
>
> An analogy from group theory is this: Given two finite sets A and B,
> then for a mapping s,
>
>   1:1 <==> onto.
>
> However, 1:1 does not MEAN the same thing as onto, and I would not
> explain 1:1 in terms of onto nor would I explain onto in terms of 1:1,
> even under this assumption (finite sets).
>
> Yeah, I think this was a bad way to explain linear independence to the
> OP in spite of the assumed conditions.
Overall I don't feel like it was my best ever explanation of a
mathematical concept. If you want to criticize something that I'll agree
with, point out that I could have just started with mutually orthogonal
vectors, and left it at that, without even mentioning linear
independence, or span, or whatever.

--
Tim Wescott
Control system and signal processing consulting
www.wescottdesign.com
On Wednesday, July 23, 2014 11:47:11 AM UTC-4, Tim Wescott wrote:
> On Wed, 23 Jul 2014 09:54:58 -0400, Randy Yates wrote:
>
> <snip>
>
>> Yeah, I think this was a bad way to explain linear independence to the
>> OP in spite of the assumed conditions.
>
> Overall I don't feel like it was my best ever explanation of a
> mathematical concept. If you want to criticize something that I'll agree
> with, point out that I could have just started with mutually orthogonal
> vectors, and left it at that, without even mentioning linear
> independence, or span, or whatever.
>
> --
> Tim Wescott
> Control system and signal processing consulting
> www.wescottdesign.com
Beyond the completeness offered by a full set of basis vectors, the
important aspect of orthogonality is that you can crank out the amount of
each basis vector in the signal without knowing the weights for each of
the other basis vectors. Otherwise we would have to resort to inverting a
matrix or something similarly difficult to find the weights.

Clay
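A small sketch of Clay's point (my own toy setup, not from the thread):
with an orthogonal basis, each weight falls out of a single inner product;
with a basis that is merely linearly independent, the weights are coupled
and you have to solve the whole N-by-N system at once.

```python
import numpy as np

N = 8
n = np.arange(N)
rng = np.random.default_rng(1)
x = rng.standard_normal(N)

# Orthogonal basis (DFT exponentials): each weight is one independent inner product.
F = np.exp(2j * np.pi * np.outer(n, n) / N)        # columns are the basis vectors
w_orth = (F.conj().T @ x) / N                       # <x, f_k> / ||f_k||^2, one k at a time
assert np.allclose(F @ w_orth, x)

# Merely linearly independent basis: the weights are coupled, so you must
# solve the full N x N system (or invert the matrix) to get any of them.
G = rng.standard_normal((N, N))                     # almost surely independent columns
w_dep = np.linalg.solve(G, x)
assert np.allclose(G @ w_dep, x)
```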