DSPRelated.com
Forums

time-varying dynamic system

Started by Unknown December 21, 2005
I have a time-varying continuous dynamic system described by the
following;

Two coupled "integrators", where an integrator is a device that at time
"t" outputs the integral from time 0 to t of its input;

The output of Integrator "A" couples to the input of integrator "B"
with coefficient w1/t, where w1 is a constant and t is time.

The output of integrator "B" is connected to the input of integrator
"A" with coefficient -w1/t.

The initial state of the integrators is (0,0). A Dirac impulse of value
"K"  is applied to integrator "A" at some time "t0".

Simulation of this system shows that it rings with constant amplitude
and decreasing frequency as time progresses.

My question: as t approaches infinity, does the output of this system
"converge"? If so, how does the converged value relate to "K" and "t0"?

Thanks for any insights!

Bob Adams

Is this the system you are describing?

   I      W            I       W
X ---> Y ---> a Y / t ---> Z ---> -a Z / t

With I the integrator, defined as y = int_0^t x dt', and W the weighting
stages. An integrator's function is to solve differential equations, so
let's see which DEs lie behind this system. That is easiest done by
considering the numerical approximations. An integrator performs (simplest
Euler forward):

y_n+1 = y_n + x_n delta t

so that the system of equations becomes

Y_n+1 = Y_n + X_n delta t

Z_n+1 = Z_n + (a * Y_n / t_n) delta t

X_n = -a Z_n / t_n

or, after using that (Y_n+1 - Y_n) / delta t ~ dY/dt and
(Z_n+1 - Z_n) / delta t ~ dZ/dt, the differential equations are given by

dY / dt = -a Z / t                           (1)

dZ / dt  = a Y / t                           (2)

Here a problem is immediately clear: at t = 0, this system is not
well-defined as 1/t = +/- infinity. However, the system does have a
homogeneous solution given by 

Y(t) = C1 sin(a ln(t)) + C2 cos(a ln(t))     (3)

Z(t) = C2 sin(a ln(t)) - C1 cos(a ln(t))     (4)

This solution never converges to a fixed value, as changing to the time
variable t' = a ln(t) gives

Y(t') = C1 sin(t') + C2 cos(t')              (5)

Z(t') = C2 sin(t') - C1 cos(t')              (6)

In retrospect, this result could also have been obtained immediately. Eqns
(1) and (2) can also be rewritten as

dY = - Z a dt / t = -Z d (a ln t)            (7)

dZ = Y a dt /t = Y d (a ln t)                (8)

Substituting t' = a ln t and differentiating once more gives that

d2 Y / d(t')2 = -Y, which has (5) as its solution, and

d2 Z / d(t')2 = -Z, which has (6) as its solution.

If you are interested, I can E-mail you the Maple worksheet I used for this
analysis.

HTH, Maarten

-- 
===================================================================
Maarten van Reeuwijk                    dept. of Multiscale Physics
Phd student                             Faculty of Applied Sciences
maarten.ws.tn.tudelft.nl             Delft University of Technology
robert.w.adams@verizon.net wrote:
> I have a time-varying continuous dynamic system described by the
> following; [...] The initial state of the integrators is (0,0). A Dirac
> impulse of value "K" is applied to integrator "A" at some time "t0".
> [...] My question; as t approaches infinity, does the output of this
> system "converge" ? If so, how does the converged value relate to "K"
> and "t0" ?
How do you create the time-varying gains? In any event, since gain is
dimensionless, w1 must also be in units of time. If in fact the gains are
constant and only their description is wrong, then the circuit is an
oscillator. I leave the proof of that to the student. Hint: call the
output of A "y". Express the output of B in terms of y; call that "z".
Express the output of A in terms of z. Remove z as a redundant variable.
Solve the resulting differential equation.

Jerry
-- 
Engineering is the art of making what you want from things you can get.
In article <1135157752.076081.111510@o13g2000cwo.googlegroups.com>,
 <robert.w.adams@verizon.net> wrote:
>I have a time-varying continuous dynamic system described by the
>following; [...] Simulation of this system shows that it rings with
>constant amplitude and decreasing frequency as time progresses.
>
>My question; as t approaches infinity, does the output of this system
>"converge" ? If so, how does the converged value relate to "K" and
>"t0" ?
>
>Bob Adams
This is how I've analyzed the system, using o for output and i for input:

o_A = \int i_A dt

o_B = \int i_B dt

i_A = -w1/t o_B

i_B = w1/t o_A

Starting with o_A, plug in the relation for i_A, then o_B, then i_B,
winding up with

o_A = - w1^2 \int \int o_A/t^2 dt dt

An integral equation, with o_A on both the left-hand side and the
right-hand side. I was never good at solving these sorts of equations,
and those that formulated the problem didn't have the courtesy to put an
exponential in there, so I don't have a solution off-hand. But maybe that
much will help if you're working in a class on that sort of thing.

But as t->inf the oscillation will clearly damp away to zero as long as
o_A remains finite. o_A may not go to zero; it may go to some finite
value and stay there, but oscillations around that point will go to zero.

-- 
"No other major companies were working on [computer-controlled homes],
and that was exactly the problem. Microsoft does best when it has a
successful competitor it can copy and then crush."
   -- Marlin Eller, "Barbarians Led by Bill Gates", 1998
Jerry


No, I actually meant to imply time-varying gains; at time = 1, the
integrator-to-integrator coupling coefficients have a value of w1; at
time = 2, they have a value of (w1)/2, etc. So this implies an
oscillator where the frequency decreases over time; if you apply an
impulse it oscillates forever, but with decreasing frequency. My
question was about whether or not it "converges" in the sense that if
the frequency becomes infinitely low, it approaches a single value that
is only dependent on when the impulse was applied (and the amplitude
of course). I think the answer is "No", it does not converge.

By the way, since this is a time-varying system it does not obey the
law of time-invariance; but I think it DOES meet the requirement for
superposition, because (f1(t) + f2(t))/t = f1(t)/t + f2(t)/t. Any
thoughts on this?

Simple C program below that uses very small time steps to simulate the
system;

#include <stdio.h>

int main(void)
{
	double delta_t = 0.001;
	double w = 1.0;  /* coupling constant w1; value chosen for illustration */
	double t, input;
	double integr1 = 0.0, integr2 = 0.0;
	FILE *fout = fopen("out.dat", "w");

	t = 1.0;  /** start at 1 so 1/t does not blow up **/
	while (t < 1000) {
		if (t == 1.0) input = 1.0; else input = 0.0;
		integr1 += input - delta_t*integr2*w/t;
		integr2 += delta_t*w*integr1/t;
		fprintf(fout, "%lf %lf\n", integr1, integr2);
		t += delta_t;
	}
	fclose(fout);
	return 0;
}

robert.w.adams@verizon.net wrote:

> ... My question was about whether or not it "converges" in the sense
> that if the frequency becomes infinitely low, it approaches a single
> value that is only dependant on when the impulse was applied, (and the
> amplitude of course). I think the answer is "No", it does not converge.
No, it does not converge; see my other post. The system oscillates as

Y(t') = C1 sin(t') + C2 cos(t')              (5)

Z(t') = C2 sin(t') - C1 cos(t')              (6)

with t' = a ln(t) and C1 and C2 integration constants.
> By the way, since this is a time-varying system it does not obey the
> law of time-invariance;
Indeed, it is not time-invariant in t. However, it is time-invariant in
t' = a ln t.
> but I think it DOES meet the requirement for superposition, because
> (f1(t) + f2(t))/t = f1(t)/t + f2(t)/t. Any thoughts on this?
They can indeed be superimposed, as the system

dY / dt = -a Z / t                           (1)

dZ / dt = a Y / t                            (2)

is linear in Y and Z.

HTH, Maarten
robert.w.adams@verizon.net wrote:
> No I actually meant to imply time-varing gains; at time = 1, the
> integrator-to-integrator coupling coefficients have a value of w1; at
> time = 2, they have a value of (w1)/2, etc. [...] By the way, since
> this is a time-varying system it does not obey the law of
> time-invariance; but I think it DOES meet the requirement for
> superposition, because (f1(t) + f2(t))/t = f1(t)/t + f2(t)/t. Any
> thoughts on this?
Thoughts are not calculations, but here's an off-the-cuff shot:

With constant gains, the solution takes the form y(t) = A*sin(wt) or
A*cos(wt) (for some suitable t) depending on which integrator's output
one looks at. A is determined by the impulse (or initial conditions)
and, IIRC, w is a function of the product of the gains.

In the time-varying case, assuming that the variation rate is small
enough compared to the frequency -- what is "enough"? -- the solution
will still look like y(t) = sin(wt), where w is now k/t^2. Of course, k
will have the dimensions of a period. Then y(t) = sin(k/t), which looks
odd, but such is life. Of course, there will be harmonics; sidebands. I
wonder what time-varying Bessel functions look like?

There will never be a final value for the function, but then, glass is
really liquid. Let's call your oscillator's output "high viscosity".

Jerry
-- 
Engineering is the art of making what you want from things you can get.
Regarding my original post, I am now faced with the task of converting
this time-varying continuous-time system to a time-varying
discrete-time system. Does anyone have any ideas on how this might be
done?

robert.w.adams@verizon.net wrote:

> Regarding my original post, I am now faced with the task of converting
> this time-varying continuous-time system to a time-varying
> discrete-time system. Does anyone have any ideas on how this might be
> done?
Huh? The little C program you sent in another post seemed pretty discrete
to me. Computers normally only understand discrete things; it requires
programs for symbolic manipulation such as Maple or Mathematica to solve
the 'real' continuous problem. So is your question about time-integration
schemes, or about how to generalize the system of equations for your
oscillator, or something else?

HTH, Maarten
In article <1135473501.350782.97100@f14g2000cwb.googlegroups.com>,
 <robert.w.adams@verizon.net> wrote:
>Regarding my original post, I am now faced with the task of converting
>this time-varying continuous-time system to a time-varying
>discrete-time system. Does anyone have any ideas on how this might be
>done?
The most straightforward way is to let your differentials become finite
differences, and work through the system one time step per iteration.
E.g.

dv/dt -> (v[i] - v[i-1])/delta_t

or

x[i] = x[i-1] + v[i-1]*delta_t + 0.5*a[i-1]*delta_t^2

Slightly more sophisticated would be to use a derivative that eliminates
first-order errors:

dv/dt -> (v[i+1] - v[i-1])/(2*delta_t)

Beyond this level, it might be best to find a book on computational
physics. It's a big subject.

-- 
"'No user-serviceable parts inside.'  I'll be the judge of that!"