DSPRelated.com
Forums

How does pole placement adjust the filter phase difference?

Started by valtih1978 September 14, 2015
Linear systems can be represented as a series of feedback nodes,
http://valjok.blogspot.com.ee/search?q=linear

It is not a big surprise, therefore, that the exponentials z^t = e^jwt 
serve as a basis for them. But only now, trying to establish the 
connection between these functions, the generating functions, poles and 
frequency response, have I recalled what I knew for many years: that the 
combination of two sines, whether serial, like sin(kt) * sin(mt), or 
parallel, like sin(kt) + sin(mt), results in another sine wave 
A sin(rt), assuming that both source sine frequencies are identical. 
Euler's formula e^jwt = cos(wt) + j sin(wt) easily explains that.

That is, if we have one exponential a^(x+y), which is y time units ahead 
of another a^x, then we can take the difference of them: Δa^x = a^(x+y) - 
a^x = a^x (a^y - 1). You see, a^y is a constant. Therefore, the 
difference of two exponentials is the lesser of them, the common 
exponential a^x, scaled by the constant "amplitude" (a^y - 1). How large 
is that amplitude scaling? a^y - 1 is zero when y = 0, because we have 
Δa^x = a^x - a^x in that case. For y > 0 (and a > 1), the amplitude is 
positive.
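The identity above can be checked numerically; the values of a, x and y below are arbitrary, chosen just to exercise the formula:

```python
# Numeric check of the identity a^(x+y) - a^x = a^x * (a^y - 1).
# a, x, y are the symbols from the text above; the values are arbitrary.

a, x, y = 1.5, 2.0, 0.75

lhs = a ** (x + y) - a ** x        # difference of the two exponentials
rhs = (a ** x) * (a ** y - 1)      # common exponential times constant "amplitude"

print(abs(lhs - rhs) < 1e-12)      # the two forms agree to rounding error
```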

In fact, a may stand for e^jw, and we have a proper sine in this case. 
This means the amplitude also rises and falls as the phase difference y 
sweeps. You see, the difference of two sines is also a sine wave, whose 
rate of growth a^x (i.e. frequency) is the common part of both, and 
whose amplitude (a^y - 1) is constant as long as the phase difference y 
is constant, i.e. when the input sines are coherent. This can be phrased 
as "the output function is a scaled copy of the input", and the scaling 
factor depends on the phase y. The same applies to the sum of two sines, 
because it holds in general: A a^(x+y) + B a^x = a^x (A a^y + B).
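A minimal sketch of the complex-exponential case, with a = e^{jw}; the values of w, y, A and B are assumed, not taken from the post:

```python
import cmath

# With a = e^{jw}, a^x is a complex sinusoid.  Combining two coherent
# sinusoids (same w, offset y apart) gives the same sinusoid scaled by
# the complex constant (A*a^y + B), as the identity in the text states.
w = 0.3          # common angular frequency (assumed value)
y = 1.2          # phase offset in time units (assumed value)
A, B = 2.0, 0.5  # arbitrary amplitudes

a = cmath.exp(1j * w)
for x in range(5):
    combined = A * a ** (x + y) + B * a ** x
    scaled = (a ** x) * (A * a ** y + B)
    assert abs(combined - scaled) < 1e-12

print("common scaling constant:", A * a ** y + B)
```

The scaling constant is complex, which is exactly why both the amplitude and the phase of the output depend on y.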

I can express this in terms of generating functions:

Aa^k/(1-ax) + B/(1-ax) = (Aa^k + B)/(1-ax)

where 1/(1 - ax) is the generating function of the exponential sequence a^n above.
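Since 1/(1 - ax) generates the sequence a^n, the generating-function identity can be verified coefficient by coefficient (the values of a, A, B and k below are arbitrary):

```python
# Coefficient-level check: 1/(1 - a*x) generates the sequence a^n, so
# (A*a^k + B)/(1 - a*x) must generate (A*a^k + B)*a^n, which should equal
# the term-by-term sum A*a^(n+k) + B*a^n of the two shifted sequences.
a, A, B, k = 0.8, 3.0, 1.0, 2   # arbitrary values for the check

for n in range(8):
    coeff = (A * a ** k + B) * a ** n       # n-th coefficient of the combined GF
    direct = A * a ** (n + k) + B * a ** n  # sum of the two sequences directly
    assert abs(coeff - direct) < 1e-12

print("generating-function coefficients match")
```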

We also know that parallel systems can be converted to serial ones and 
vice versa. If we consider them serially, the first sine can be treated 
as the input signal, while the second acts as a filter. Does this mean 
that to filter some frequency we need to match the filter frequency to 
that of the input and adjust the phase? Yes, we get so-called 
"resonance" as the input signal approaches the filter's 
(eigen)frequency. I just wonder how the phase is adjusted automatically 
by placing the poles and zeroes?
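The resonance mentioned above can be sketched with a standard two-pole resonator; the pole radius r, pole angle w0 and the difference equation are a textbook construction, not something from this post:

```python
import math

# A two-pole filter with poles at r*e^{±j*w0} amplifies inputs near w0.
# r and w0 are assumed values chosen to put a sharp resonance at 0.5 rad.
r, w0 = 0.98, 0.5
b1, b2 = 2 * r * math.cos(w0), -r * r   # feedback coefficients from the poles

def steady_gain(w, n=4000):
    """Drive y[t] = x[t] + b1*y[t-1] + b2*y[t-2] with sin(w*t) and
    measure the output amplitude after the transient has died out."""
    y1 = y2 = 0.0
    peak = 0.0
    for t in range(n):
        y = math.sin(w * t) + b1 * y1 + b2 * y2
        y2, y1 = y1, y
        if t > n // 2:               # skip the start-up transient
            peak = max(peak, abs(y))
    return peak

# Near the pole angle the filter rings up far more than far away from it.
print(steady_gain(0.5) > 20 * steady_gain(2.0))
```

The gain peaks when the input frequency lines up with the pole angle, which is what moving a pole toward the unit circle at a chosen angle buys you.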
On Tue, 15 Sep 2015 00:22:39 +0300, valtih1978 wrote:

> [original post quoted in full; snipped]
Before I even attempt to satisfy you with an answer to what I think your 'real' question may be, I need to point out that you seem to have a very deep misunderstanding of the relationship between signals and systems -- you seem, in fact, to be conflating signals with systems.

Yes, a signal and a time-invariant linear system can be described by seemingly-identical expressions in z or s. Yes, a linear system can be completely characterized by its impulse response. But a signal is still just a signal, and a system is both more than a signal, and not a signal at all. So your long preface about how two signals combine has very little to do with the follow-on question about how one signal is changed by running it through a system.

If your question hasn't been rendered obsolete by the above observation, try trimming it down to two or three lines and asking again.

-- 
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
The preface explains which "phase shift" I am referring to. If you can 
see that without the preface, fine. I see no reason to cut it out, and I 
see no connection with the system != signal issue. I do not see the 
difference between signals, values and systems, to be frank. But I do 
not think that abridging the preface would make it clear. Furthermore, I 
would be glad to know about the difference, especially to see how much 
it matters in the answer.
On Monday, September 14, 2015 at 5:22:48 PM UTC-4, valtih1978 wrote:
> [original post quoted in full; snipped]
Your question is too deep for me
On Tue, 15 Sep 2015 02:33:34 +0300, valtih1978 wrote:

> [quoted text snipped]
I suggest that you think about systems and signals initially from the perspective of the time-domain (and convolution). Think about what the impulse response means. This may help clarify the distinct roles. The use of transforms is a fantastic tool for efficiently calculating responses but the similarity of form might be confusing you. Once you've got your head around that you can think again about the meaning of phase and frequency.
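The time-domain view suggested above can be made concrete with a direct convolution; the 3-tap moving average below is an arbitrary example system, not one from the thread:

```python
# A system is characterized by its impulse response h; its output is the
# convolution of the input signal x with h.  The 3-point average h and
# the signal x are arbitrary examples.
h = [1 / 3, 1 / 3, 1 / 3]      # impulse response of a 3-point moving average
x = [0.0, 3.0, 6.0, 3.0, 0.0]  # an arbitrary input signal

def convolve(x, h):
    """Direct time-domain convolution: y[n] = sum_k h[k] * x[n-k]."""
    y = [0.0] * (len(x) + len(h) - 1)
    for n in range(len(y)):
        for k in range(len(h)):
            if 0 <= n - k < len(x):
                y[n] += h[k] * x[n - k]
    return y

print(convolve(x, h))  # each output sample averages three input samples
```

Note the distinct roles: x is the signal, h describes the system, and convolution is how the system acts on the signal.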
On Tue, 15 Sep 2015 02:33:34 +0300, valtih1978 wrote:

> [quoted text snipped]
Until you understand the distinction between signals and systems your questions will be meaningless, and sensible attempts to answer them will be meaningless to you. I _really_ suggest that you get your brain into line with standard DSP practice before you proceed. If you don't, you're going to waste a lot of your own time, and the time of anyone who gets sucked into trying to help you.

The short answer is that both a system and a signal are mathematical abstractions for real-world phenomena, but a signal is not a system and a system is not a signal. A signal is something that evolves over time and carries information, and is usually embedded in some physical phenomenon (a voltage on a wire, a number in a register, etc.). A system is something that transforms a signal in some manner, and often causes information in the signal to be lost or degraded*, but (as they are usually considered) does not add information to a signal.

Any good book on signal processing should spell out the distinction -- I know mine does (http://www.elsevier.com/books/applied-control-theory-for-embedded-systems/wescott/978-0-7506-7839-1#description). "Signals and Systems" by Oppenheim and Willsky, with Young (Prentice-Hall) does too, although not in quite as short a form as I give above.

This article alludes to the distinction between a signal and a system, but it assumes that the reader already understands it: http://wescottdesign.com/articles/zTransform/z-transforms.html

* If your system is doing its job, the information that is lost is usually lots of detail about stuff you don't care about, leaving you with whatever information is there about the stuff you _do_ care about, in a form that you can use. Which is the whole point of signal processing, after all.

-- 
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
On Tuesday, September 15, 2015 at 9:22:48 AM UTC+12, valtih1978 wrote:
> [original post quoted in full; snipped]
You certainly appear to have the knack of making something simple sound very complicated. A system is something like a filter or motor-amplifier or whatever into which you inject your signal. When it comes to things like convolution in the mathematical sense, the distinction gets a little blurred, because the impulse response is in itself a signal -- but don't lose track of the real world.
On Tue, 15 Sep 2015 10:50:47 -0700, gyansorova wrote:

> [quoted text snipped]
It's pretty easy to keep the distinction sharp if you remember that the impulse response of a system is that system's output given one very specific input. It's like the difference between a CD and the songs on it.

It's also a good idea to keep in mind that the impulse response fully defines only a linear system -- in general, the impulse response of a nonlinear system is fairly meaningless.

-- 
Tim Wescott
Wescott Design Services
http://www.wescottdesign.com
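That "one very specific input" is the unit impulse, which is easy to demonstrate; the first-order system below is an arbitrary example, not one from the thread:

```python
# The impulse response is the system's output for one specific input:
# the unit impulse.  For the (arbitrary) first-order recursive system
# y[n] = x[n] + 0.5*y[n-1], feeding an impulse recovers h[n] = 0.5^n.
def system(x):
    y, prev = [], 0.0
    for v in x:
        prev = v + 0.5 * prev
        y.append(prev)
    return y

impulse = [1.0, 0.0, 0.0, 0.0, 0.0]
print(system(impulse))  # [1.0, 0.5, 0.25, 0.125, 0.0625], i.e. 0.5^n
```

The function `system` is the system; the list it returns for the impulse input is a signal, which illustrates the CD-versus-songs distinction above.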
As the response to this question suggests, I understand pretty well what 
convolution in the time domain is:

https://en.wikipedia.org/wiki/Talk:Convolution#Why_the_time_inversion.3F

Probably I should study cooking and all other kinds of stuff. That would 
also keep me away from my passion.
> Any good book on signal processing should spell out the distinction --
> I know mine does (http://www.elsevier.com/books/applied-control-theory-for-embedded-systems/wescott/978-0-7506-7839-1#description).
The distinction? Between system and signal? How many years should I spend drilling that a movie is not a video player? How many books and decades do I need to drill such kid stuff? Once I learn that, you will tell me that a value is not a signal and that I need to read another book to understand that. This will never end. I guess that is the whole point. This is how it is related to my question.
> http://wescottdesign.com/articles/zTransform/z-transforms.html
Another set of thrice-told magic recipes. I am fed up with them. Sorry, I appreciate what you say, and I understand that I should stay here and never try to unwind what stands behind it.