An optimization problem (real world, not study).
I have a 2x2 matrix A used to iteratively update a 2x1 column vector P = (x, y):
|x'|   | a b |   |x|
|y'| = | c d | . |y|
Next iteration let P = P', then go again
In my app, a = cos(w), b = sin(w), c = -sin(w), d = cos(w), for a phase angle increment w.
So, with the right initial values,
x = sin(P), y = cos(P),
the updated matrix P' is
x' = sin(P+w), y' = cos(P+w)
This is the common fixed-frequency quadrature oscillator: four multiplications per step instead of evaluating sin and cos at every step.
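For concreteness, the fixed-increment recursion can be sketched like this (my own minimal Python illustration, not code from the thread; names are invented):

```python
import math

def quadrature_osc(phase0, w, n):
    """n steps of (sin, cos) at fixed phase increment w: one 2x2
    rotation (4 multiplies) per step, no sin/cos calls in the loop."""
    a, b = math.cos(w), math.sin(w)       # matrix entries, computed once
    x, y = math.sin(phase0), math.cos(phase0)
    out = []
    for _ in range(n):
        # |x'|   |  a  b | |x|
        # |y'| = | -b  a | |y|
        x, y = a * x + b * y, -b * x + a * y
        out.append((x, y))
    return out
```

After k steps the state should equal (sin(phase0 + k*w), cos(phase0 + k*w)), which is easy to verify against the math library.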
Now I want to gradually change the phase increment angle w by v, in N steps, i.e. w += (v/N) per step,
until I end up with matrix An, where
a' = cos(w+v) b' = sin(w+v) etc
It's easy (or at my level, doable) to find a second matrix B to update A at each iteration,
|a' b'|   | e f |   | a b |
|c' d'| = | g h | . | c d |
where
e = cos(v/N), f = sin(v/N), g = -sin(v/N), h = cos(v/N)
Then I use
A1 = B x A
P' = A1 x P
A2 = B x A1
...
An = B x A(n-1)
P' = An x P
and happily note that An == A' (despite the fact that I coded it myself)
This means a second matrix mul A' = B x A at every iteration during the transition phase.
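The transition scheme just described — rotate P with A, then rotate A itself with B, every step — can be sketched as follows (my own illustration; names are invented). Because A and B are both rotations, only two entries of each need storing (c = -b, d = a):

```python
import math

def chirp_transition(x, y, w, v, n):
    """Two 2x2 multiplies per step: P' = A.P, then A' = B.A.
    Step m uses increment w + m*(v/n), so the total phase added
    over n steps is n*w + v*(n-1)/2."""
    a, b = math.cos(w), math.sin(w)          # A: current per-step rotation
    e, f = math.cos(v / n), math.sin(v / n)  # B: per-step nudge of A
    for _ in range(n):
        x, y = a * x + b * y, -b * x + a * y  # P' = A . P
        a, b = e * a - f * b, e * b + f * a   # A' = B . A
    return x, y
```

Starting from (sin P, cos P), the result after n steps is (sin(P + n*w + v*(n-1)/2), cos(...)), which matches summing the increments w, w+v/n, ..., w+(n-1)v/n.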
Question:
How do I combine B with A into a single matrix C, so I can skip the second matrix mul
and instead just do
P' = C x P
for N steps, then let A = A'
and continue with
P' = A x P (fixed rotation at the new frequency)
I tried some variants, but guessing gets me nowhere.
Thanks,
Jocke P
fast sine freq change (fast chirp)
Started by ●November 8, 2009
Reply by ● November 9, 2009
> [snip]
> Question:
> How do I combine B with A to a single matrix C, so I can skip the second matrix mul
> and instead just do
> P' = C x P
> for N steps, then let A = A'
> and continue with
> P' = A x P (fixed rotation at the new frequency)

I was able to follow the initial stuff you were saying, but I don't follow the notation of the "Question:" part (the last 7-ish lines). My confusion may, in part, be due to the fact that you are not using any notation to distinguish P and P' from one iteration to the next.
Reply by ● November 9, 2009
On Nov 8, 9:19 pm, Jocke P <j...@snospamtoreroom.se> wrote:
> [snip]
> Question:
> How do I combine B with A to a single matrix C, so I can skip the second matrix mul
> and instead just do
> P' = C x P

I don't think you *can* do what you are looking for. The issue is that in your original problem statement, you need to update the matrix A (a function of w) with matrix B (a function of v). If you "unroll" the loop this creates (manually multiply A by B n times), I suppose you would end up with an Nth-order expression that could be put back into a matrix.

Without actually doing a bit of the math I can't visualize the result. It would be a function of w, v and n, where n is the iteration count variable.

Rick
Reply by ● November 9, 2009
Thanks to rickman and Michael for helpful responses! Now reformulating my problem, with diminishing hope of finding a solution... (below)

rickman wrote:
> I don't think you *can* do what you are looking for. The issue is
> that in your original problem statement, you need to update the matrix
> A (a function of w) with matrix B (a function of v). If you "unroll"
> the loop this creates (manually multiply A by B n times) I suppose
> you would end up with an Nth-order equation that could be put back
> into a matrix. Without actually doing a bit of the math I can't
> visualize the result. The result would be a function of w, v and n,
> where n is the iteration count variable.

Thanks for your reply! Trying to work out better what I'm looking for...

I currently have P = some fixed phase angle (e.g. 0), w = angle increment, v = increase of w per iteration.

(iter 1)
| x1 = cos(P+w) |        | x0 = cos(P+0w) |
| y1 = sin(P+w) | = A0 . | y0 = sin(P+0w) |

(iter 2)
| x2 = cos(P+2w) |        | x1 = cos(P+w) |
| y2 = sin(P+2w) | = A0 . | y1 = sin(P+w) |

...etc, where A0 is

|  cos(w), sin(w) |
| -sin(w), cos(w) |

This gives fixed-speed rotation of my (x(n), y(n)). If e.g. w = 10 deg, the (x,y) products of the above matrix multiplications would generate sin, cos of the angles w(n) = 10, 20, 30... deg.

Now I want to increase w by v = 1 on each iteration, so I get the series of angles w(n)+v(n) = 10, 21, 32, 43, 54... deg.

[PS] But this means I'm looking at a power series of w angles rather than an additive series. Which I guess is what rickman tells me? This is obscured by my formulation in terms of sin(P+w+v) etc. I thought that since I'm already raising the angles to some power (as per the complex-number formulation of rotation, e^(i.theta)), can't I just raise them some more... [/PS]

To finish the problem, and verify that this can /not?/ be done:

I can increment (or raise?) A0 by multiplying with B =

|  cos(v), sin(v) |
| -sin(v), cos(v) |

so A1 = B . A0,
A1 =
|  cos(w+v), sin(w+v) |
| -sin(w+v), cos(w+v) |

Next, I use the A1 matrix on the (x,y) values from the previous iteration:

| x2 = cos(P+2w+v) |   |  cos(w+v), sin(w+v) |   | x1 = cos(P+w) |
| y2 = sin(P+2w+v) | = | -sin(w+v), cos(w+v) | . | y1 = sin(P+w) |

Then generate A2 = B . A1, multiply (x3,y3) = A2 . (x2,y2), and so on.

But as noted, this requires two matrix multiplications at each iteration:

A(m) = B . A(m-1)
XY(m+1) = A(m) . XY(m)

As hinted before, this does generate the values I want, but at the cost of 8 multiplications. On my desktop this is only 20-30% faster than using a proper sin(w) call to the C runtime at every point. I believe this can shrink to 3-6 mults by using another matrix, [2cos(w), 1 | -1, 0], but that is still far from the factor of 5-10 I dreamed about...

Question repeated:
So, could there be a matrix C that combines the B and A(m) matrices, so that

| cos(P+2w+2v) |       | cos(P+w+v) |
| sin(P+2w+2v) | = C . | sin(P+w+v) |

| cos(P+3w+3v) |       | cos(P+2w+2v) |
| sin(P+3w+3v) | = C . | sin(P+2w+2v) |

and so on?

Hopefully not less clear than the first post... Apologies for inconsistency of notation, I'm trying to do this on high-school math + a book.

Thanks for your time and patience,
jocke
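The cheaper matrix mentioned above corresponds to the standard two-term recursion s[k+1] = 2*cos(w)*s[k] - s[k-1] (from the identity sin(P+w) + sin(P-w) = 2*sin(P)*cos(w)). A minimal sketch (my own illustration, invented names) of why it is cheap at fixed frequency — and why its state, two *successive* samples, makes a mid-stream frequency change less obvious than a coefficient swap:

```python
import math

def biquad_sine(phase0, w, n):
    """Fixed-frequency sine via s[k+1] = 2*cos(w)*s[k] - s[k-1]:
    one multiply per sample.  State is the two most recent samples,
    so it must be seeded with sin(phase0 - w) and sin(phase0)."""
    c = 2.0 * math.cos(w)
    s_prev, s = math.sin(phase0 - w), math.sin(phase0)
    out = [s]
    for _ in range(n - 1):
        s_prev, s = s, c * s - s_prev   # one multiply, two state words
        out.append(s)
    return out
```

Sample k of the output should equal sin(phase0 + k*w) at the fixed frequency.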
Reply by ● November 10, 2009
can we restate this as being done with a complex sinusoid:

On Nov 8, 9:19 pm, Jocke P <j...@snospamtoreroom.se> wrote:
> In my app, a = cos(w), b = sin(w), c = -sin(w), d = cos(w), for a phase angle increment w.
> So, with the right initial values,
>     x = sin(P), y = cos(P),
> the updated matrix P' is
>     x' = sin(P+w), y' = cos(P+w)

so z = x + i*y and similarly, z' = x' + i*y' .

z' = exp(i*w) * z

> Now I want to gradually change phase increment angle w to v, in N steps, ie w += (v/N)

that isn't exactly what you said.

> It's easy (or at my level, doable) to find a second matrix B to update A at each iteration,
> where
>     e = cos(v/N), f = sin(v/N), g = -sin(v/N), h = cos(v/N)

yeah. this is why you want to do this the complex number way (and also use some notation for discrete time). so, instead of x, y, z and x', y', z' we say it like this:

z[n+1] = z[n] * exp(i*w[n])
w[n+1] = w[n] + v0

where v0 is an increment we'll worry about later.

w[n] = w[0] + v0*n

the angle of z[n] is the only thing that gets changed:

z[n] = z[0]*exp(i*(w[0]+w[1]+...+w[n-1]))

the sum of angles is

                 n-1            n-1
arg{z[n]/z[0]} = SUM{ w[m] }  = SUM{ w[0] + v0*m }
                 m=0            m=0

which, at n = N, is

= w[0]*N + N*(N-1)/2 * v0

so to go from w[0] to w[N] in N samples, v0 = (w[N] - w[0])/N . but the angle of the complex discrete signal increases by

w[0]*N + (N-1)*(w[N]-w[0])/2

or

N*(w[0]+w[N])/2 - (w[N]-w[0])/2

now i'm wondering if this is the answer to your question. use a single matrix with your cos() and sin() terms inside and use the above angle for all of the cos() and sin() functions in your 2x2 matrix. that is what will change the phase angle from the first to the last sample.

r b-j
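r b-j's closed form for the accumulated angle is easy to check numerically. A small sketch (my own, with invented names) that runs the recursion and compares against the formula:

```python
import cmath

def chirped_phase(w0, wN, N):
    """Run z[n+1] = z[n]*exp(i*w[n]), w[n+1] = w[n] + v0 with
    v0 = (wN - w0)/N, and return the accumulated angle, the closed
    form w0*N + N*(N-1)/2 * v0, and the final complex state z."""
    v0 = (wN - w0) / N
    z, w, total = 1.0 + 0.0j, w0, 0.0
    for _ in range(N):
        z *= cmath.exp(1j * w)
        total += w
        w += v0
    closed = w0 * N + N * (N - 1) / 2.0 * v0
    return total, closed, z
```

With small angles (so the phase never wraps past pi), the accumulated angle, the closed form, and arg(z) all agree.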
Reply by ● November 10, 2009
> Question repeated:
> So, could there be a matrix C that combines the B and A(m) matrix, so that
>
> | cos(P+2w+2v) | = C . | cos(P+w+v) |
> | sin(P+2w+2v) |       | sin(P+w+v) |
>
> and so on?

Oleg wrote:

As to this concrete question, the answer is No. Trivial proof: C = B.B.A0 = B.A0 => B must equal 1 (the identity).

If you need, I can suggest the next method (I checked it only in Matlab), based on the oscillator equation d2X/dt2 = -w*w*X, w = 2*Pi*frequency. The discrete analog is

X[k+1] = X[k] + w*Y[k]
Y[k+1] = Y[k] - w*X[k],   w < 1

This is the ideal model, and generates sin and cos from some initial value with R = (x*x + y*y) = 1.

But this algorithm is unstable! R will slowly drift up or down in this case.

Stable iterations are:

R = 1 - (X[k]*X[k] + Y[k]*Y[k])
X[k+1] = X[k] + w*(Y[k] + X[k-1]*R)
Y[k+1] = Y[k] - w*(X[k] + Y[k-1]*R)

Performance is 6 product operations per step.

PS. the algorithm is not strictly described, sorry
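Oleg's correction terms are only sketched, but the instability he warns about is easy to demonstrate, and a first-order radius renormalization (a standard fix, not necessarily his exact formulation) tames it. A Python illustration of my own, with invented names:

```python
def naive_osc(w, n):
    """X[k+1] = X[k] + w*Y[k]; Y[k+1] = Y[k] - w*X[k].
    Each step multiplies the radius by sqrt(1 + w*w), so the
    amplitude grows without bound."""
    x, y = 1.0, 0.0
    for _ in range(n):
        x, y = x + w * y, y - w * x   # simultaneous update: old x, old y
    return x, y

def stabilized_osc(w, n):
    """Same recursion plus a gain g = (3 - r^2)/2 applied each step,
    which to first order pulls the radius r back toward 1."""
    x, y = 1.0, 0.0
    for _ in range(n):
        x, y = x + w * y, y - w * x
        g = (3.0 - (x * x + y * y)) / 2.0   # ~1 - (r^2 - 1)/2
        x, y = x * g, y * g
    return x, y
```

Running both for a while shows the naive radius blowing up while the corrected one stays pinned near 1.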
Reply by ● November 10, 2009
On Nov 8, 9:19 pm, Jocke P <j...@snospamtoreroom.se> wrote:
> [snip]
> Question:
> How do I combine B with A to a single matrix C, so I can skip the second matrix mul

Hello Jocke,

Yes, you will effectively need two matrix multiplies, but you don't have to use 4 mults per matrix.

For essentially the same cost as one of your general matrix mults, you can do two "equiamplitude staggered update" oscillator iterations.

For a single oscillator with two memory elements "p" and "q", calculate the following constant:

k = 2.0*sin(PI*freq/sample_rate);  // 2 sin (half of the step angle per iteration)

Then the update iteration (yes, without a temporary register!) is the two following statements:

q = q - k*p;
p = p + k*q;

For your application, do this twice with two different "k" values, but have both operations work on the same "p" and "q" values.

For some more details on discrete-time oscillators, see my paper here:
http://www.claysturner.com/dsp/2nd_OSC_paper.pdf

The drawings for the network forms of the "staggered update oscillators" have sign errors: Fig 5's upper summer's inputs need to both be positive, and Fig 6's upper summer's inputs need to have their polarities swapped.

I hope this helps.

Clay
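Clay's two statements translate directly; a quick sketch (my own illustration, invented names) that also exercises the claimed amplitude stability — with a step angle of 2*pi/100 per sample, the state returns to its start after exactly 100 iterations:

```python
import math

def staggered_osc(freq, sample_rate, n):
    """Clay's 'equiamplitude staggered update' oscillator:
    k = 2*sin(pi*freq/fs), then q -= k*p; p += k*q.  The second
    statement uses the freshly updated q -- that stagger is what
    makes the update amplitude-stable with no temporary register."""
    k = 2.0 * math.sin(math.pi * freq / sample_rate)
    p, q = 0.0, 1.0
    out = []
    for _ in range(n):
        q -= k * p
        p += k * q          # note: the q updated on the previous line
        out.append((p, q))
    return out
```

The one-step map has determinant 1 and eigenvalue angle w = 2*pi*freq/fs (so k = 2*sin(w/2), matching Clay's comment), which is why the state comes back to (0, 1) after a whole number of cycles.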
Reply by ● November 10, 2009
Clay wrote:
> Yes you will effectively need two matrix multiplies, but you don't
> have to use 4 mults per matrix.
>
> For essentially the same cost as one of your general matrix
> mults, you can do two "equiamplitude staggered update" oscillator
> iterations.

Good! Yes, I thought this optimization would be at hand, just that my attempts to use the simpler matrix

| 2cos(w) -1 |
|    1     0 |

with _changing_ frequency have so far not panned out - I've miscalculated, or misthought.

I have indeed looked at your paper back and forth, one of the most helpful presentations I've found, thanks! Was planning to go over it again to understand better (this is also an iterative process, in very small increments).

> For a single oscillator with two memory elements "p" and "q"
> calculate the following constant
>
> k=2.0*sin(PI*freq/sample_rate);
>
> Then the update iteration (yes without a temporary register!) is the
> two following statements.
>
> q=q-k*p;
> p=p+k*q;
>
> For your application, do this twice with two different "k" values, but
> have both operations work on the same "p" and "q" values.

This suggestion looks good. Very helpful, thanks a lot! Will want to fiddle with yours and rbj's suggestions now (hopefully even understanding a bit more how it ticks); if I get it right, hundreds of high-pitched bleeps will be singing your praise in real-time!

Jocke
Reply by ● November 10, 2009
robert bristow-johnson wrote:
> can we restate this as being done with a complex sinusoid:

Yes, please! Thank you for your clear and pedagogical answer (I know you provide a lot of those in several forums). I might be holed up for a while, feebly poking at calculator & my code to check out your suggestions. They look very much like the/an answer. Any way it goes, just wanted to say thanks now rather than later.

- Jocke
Reply by ● November 10, 2009
vashkevich wrote:
> Oleg wrote:
>
> As to this concrete question, answer is No.
> Trivial proof:
> C = B.B.A0 = B.A0 => B must equal 1.

Ah. Of course.

> If you need, I can suggest next method (I checked it only in matlab),
> based on oscillator equation d2X/dt2 = -w*w*X, w = 2*Pi*frequency
> [snip]
> But this algorithm is unstable! R will slowly drift up or down in this case.
>
> Stable iterations are:
> R = 1 - (X[k]*X[k] + Y[k]*Y[k])
> X[k+1] = X[k] + w*(Y[k] + X[k-1]*R)
> Y[k+1] = Y[k] - w*(X[k] + Y[k-1]*R)
>
> Performance is 6 product operations on 1 step

Looks very good. I'll be happy to try this version and compare it with the others posted. The stability heads-up is also useful.

When running one fixed-frequency version I looked after some cycles and found that the values come back to the same bit pattern. In that case the size of the floats (32/64-bit) would just determine how closely frequencies can be represented. But how do you tell which version is stable? Does it show right in the algorithm, or is there some test? (Other than my rather crude method of running it and just observing the output.)

Thanks, and cheers,
Jocke