Reply by October 14, 2017
There are islands of stability where there can be a little hunting around in the LSB or two. Definitely fun to explore.

These days I see applications where one needs a large number of modulated subcarriers and an FFT is used, e.g. OFDM in LTE.

I recall iDEN using 4 subcarriers, which weren't orthogonal, so my oscillators worked pretty well.

Clay
Reply by October 14, 2017
Greg,
True, you get limited by the precision of the coefs, but I never found that to be a big issue. Back when we were stuck with 8 bits, the coarseness was a concern, but by the 90s, I was using 56000 series DSPs with 24 bit coefs. 

With the driven oscillators in Goertzel analyzers, the topology was much more of a concern.

Clay
Reply by October 12, 2017
On Saturday, September 23, 2017 at 5:08:09 AM UTC-4, Piotr Wyderski wrote:
> Resonant filters are often used for this purpose,
> but with a remark that they need some way of amplitude
> control, because the numeric errors can make it
> rise without control (gain > 1) or decay to 0 (gain < 1).
> I don't know about you, but for me it was a great
> surprise to discover that for certain "magic" values
> of amplitude the error integrated over the entire
> period is *exactly* 0, which means that the filter
> begins the next period in exactly the same state it
> started, so no AGC is needed. For example, for
> omega=2*PI/1024 and 24-bit arithmetic the stable
> amplitude value is 2163013 (with two_cos = 0x7fff62).
>
> I thought it might be an interesting result to share.
>
> Best regards, Piotr
The Matlab code below implements the classic 2-integrator oscillator. No need for stabilization loops, no need to set frequencies to any magic numbers. The only drawback is that on-the-fly frequency changes will cause minor changes in amplitude. I inserted 10-bit quantization in the loop to show that resolution won't cause the amplitude to drift.

Bob

fs = 48e3;
fosc = 1.037e3;   % some random frequency not related to fs
a = sqrt(2.0 - 2.0*cos(2*pi*fosc/fs));
numsamples = 1000000;
integr1 = zeros(numsamples,1);
integr2 = zeros(numsamples,1);
integr1(1) = 1;   % to start it off
for k = 2:numsamples
    integr1(k) = integr1(k-1) + a*integr2(k-1);
    integr1(k) = (1/1024)*floor(1024*integr1(k));   % quantize to 10 bits to prove precision doesn't matter
    integr2(k) = integr2(k-1) - a*integr1(k);
    integr2(k) = (1/1024)*floor(1024*integr2(k));   % quantize to 10 bits to prove precision doesn't matter
end
figure;
plot(integr1);
Reply by Greg Berchin October 12, 2017
On Wednesday, October 11, 2017 at 10:33:24 PM UTC-5, cl...@claysturner.com wrote:
 
> You may find these to be of interest:
>
> http://www.claysturner.com/dsp/2nd_OSC_paper.pdf
>
> http://www.claysturner.com/dsp/digital_resonators.pdf
Clay,

I haven't read the papers in detail, but from a quick look I get the impression that the oscillator frequencies are limited to a discrete set for which the step angle theta = 2*pi*f/fs can be exactly represented in the numerical format in use, or perhaps even more restrictively, for which cos(theta) or sin(theta) or sin(theta/2), and in some cases their squares, can be exactly represented. Is that correct?

Greg
Reply by October 12, 2017
On Saturday, September 23, 2017 at 4:08:09 AM UTC-5, Piotr Wyderski wrote:
> Resonant filters are often used for this purpose,
> but with a remark that they need some way of amplitude
> control, because the numeric errors can make it
> rise without control (gain > 1) or decay to 0 (gain < 1).
> I don't know about you, but for me it was a great
> surprise to discover that for certain "magic" values
> of amplitude the error integrated over the entire
> period is *exactly* 0, which means that the filter
> begins the next period in exactly the same state it
> started, so no AGC is needed. For example, for
> omega=2*PI/1024 and 24-bit arithmetic the stable
> amplitude value is 2163013 (with two_cos = 0x7fff62).
>
> I thought it might be an interesting result to share.
>
> Best regards, Piotr
Piotr et al,

You may find these to be of interest:

http://www.claysturner.com/dsp/2nd_OSC_paper.pdf

http://www.claysturner.com/dsp/digital_resonators.pdf

At the end of the slides, I show how any simple discrete-time oscillator can be used in a Goertzel application. This can greatly improve the precision of the results.

Clay
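(A minimal Octave/Matlab sketch of the standard single-bin Goertzel recursion, for readers who want to connect the resonator discussion to the Goertzel application Clay mentions. This is the textbook form, not the oscillator-driven variant from Clay's slides; the block length N, bin index k, and test signal are placeholder choices.)

% Standard Goertzel recursion for one DFT bin (textbook form).
N = 256;                            % block length
k = 13;                             % bin index (0..N-1)
x = cos(2*pi*k*(0:N-1)'/N);         % unit-amplitude test tone exactly on bin k
coeff = 2*cos(2*pi*k/N);
s1 = 0;  s2 = 0;                    % state: s[n-1], s[n-2]
for n = 1:N
    s0 = x(n) + coeff*s1 - s2;      % resonator driven by the input
    s2 = s1;
    s1 = s0;
end
pwr = s1^2 + s2^2 - coeff*s1*s2;    % squared magnitude of bin k
disp(pwr)                           % about (N/2)^2 = 16384 for this tone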
Reply by October 4, 2017
On Sunday, September 24, 2017 at 11:03:41 AM UTC+13, radam...@gmail.com wrote:
> I have used the following resonant filter oscillator structure for a long time.
>
> Integrator1 -> mult by sqrt(2-2cos(theta)) -> integrator2 -> mult by sqrt(2-2cos(theta)) -> back to integrator1
>
> Where integrator1 is delayed by 1 sample and integrator2 has no delay. You start it with an impulse and the amplitude is stable forever, even when you use finite wordlengths.
> However, if you change the frequency on-the-fly, the amplitude will shift.
> I forget the name of this structure but I think it was published in the 60's
>
> Bob
Just two integrators with feedback give zero phase margin, which makes a perfect oscillator. It doesn't work that well in analogue unless you are lucky; often it just saturates the output.
Reply by robert bristow-johnson October 2, 2017
On Sunday, October 1, 2017 at 6:35:23 AM UTC-4, Piotr Wyderski wrote:
>
> The point of this thread is that for a given period
> there are certain amplitudes which make the resonator
> perfectly stable. It is quite possible that one could
> forge a theory explaining a priori which amplitudes
> are correct, but in practice a brute-force scan finds
> them in a matter of minutes, so that's a fine kludge.
>
> Once you have the amplitude, creating a quadrature pair
> or arbitrarily more phases is just a matter of selecting
> a correct phase offset.
the way to do it for the simple resonator:

    y[n] = 2 cos(omega) y[n-1] - y[n-2]

is to set the initial conditions, y[-1] and y[-2], correctly. if you want a given amplitude and phase (as well as a given frequency, omega):

    y[n] = A cos(omega n + phi)

then

    y[-1] = A cos(-omega + phi)
    y[-2] = A cos(-2 omega + phi)

you must set these for your states y[n-1] and y[n-2] to get the specified amplitude and phase.

r b-j
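(A minimal Octave/Matlab sketch of the initialization r b-j describes for the two-multiplier resonator; the amplitude, phase, and frequency values below are arbitrary placeholders.)

% y[n] = 2*cos(omega)*y[n-1] - y[n-2], started so that y[n] = A*cos(omega*n + phi).
fs    = 48e3;
f0    = 1e3;
omega = 2*pi*f0/fs;
A     = 0.5;
phi   = pi/4;
N     = 1000;

ym1 = A*cos(-omega   + phi);        % y[-1]
ym2 = A*cos(-2*omega + phi);        % y[-2]
c   = 2*cos(omega);
y   = zeros(N,1);
for n = 1:N
    y(n) = c*ym1 - ym2;
    ym2  = ym1;
    ym1  = y(n);
end
% y now matches A*cos(omega*(0:N-1)' + phi) to within roundoff.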
> > there is a pretty simple fix to that to make the amplitude constant.
>
> At least a few, e.g. resetting the oscillator after a full cycle.
> With my trick the oscillator effectively resets itself, so there
> is no need to do it manually.
no, with the quadrature oscillator, the fix is a way to adjust the coefficients up or down so that the square of the in-phase component plus the square of the quadrature component adds up to a constant 1.
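(A minimal Octave/Matlab sketch of one common way to do this on a coupled quadrature oscillator: rotate the in-phase/quadrature state each sample and rescale so the sum of squares stays exactly 1. The per-sample 1/sqrt normalization is an illustrative choice, not necessarily the exact fix r b-j has in mind.)

% Coupled quadrature oscillator with per-sample amplitude renormalization.
fs = 48e3;  f0 = 1e3;
w  = 2*pi*f0/fs;
cw = cos(w);  sw = sin(w);
N  = 100000;

c = 1;  s = 0;                      % in-phase and quadrature states
I = zeros(N,1);  Q = zeros(N,1);
for n = 1:N
    cn = c*cw - s*sw;               % rotate the state vector by w radians
    sn = c*sw + s*cw;
    g  = 1/sqrt(cn^2 + sn^2);       % pull c^2 + s^2 back to exactly 1
    c  = g*cn;  s = g*sn;
    I(n) = c;  Q(n) = s;
end
% I and Q remain a unit-amplitude cosine/sine pair indefinitely.

In fixed point the square root is often replaced by the first-order approximation g = (3 - (c^2 + s^2))/2, which is cheap and accurate when the amplitude error is small.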
Reply by Piotr Wyderski October 1, 2017
robert bristow-johnson wrote:

 > if you need a quadrature pair of oscillators, you can do
 > it pretty simply but there *is* that concern of the sinusoids
 > growing slowly (unstable) to dying off slowly.

The point of this thread is that for a given period
there are certain amplitudes which make the resonator
perfectly stable. It is quite possible that one could
forge a theory explaining a priori which amplitudes
are correct, but in practice a brute-force scan finds
them in a matter of minutes, so that's a fine kludge.

Once you have the amplitude, creating a quadrature pair
or arbitrarily more phases is just a matter of selecting
a correct phase offset.

 > there is a pretty simple fix to that to make the amplitude constant.

At least a few, e.g. resetting the oscillator after a full cycle.
With my trick the oscillator effectively resets itself, so there
is no need to do it manually.

	Best regards, Piotr
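
(A minimal Octave/Matlab sketch of the kind of brute-force scan described above: run a 24-bit fixed-point version of the resonator y[n] = 2*cos(omega)*y[n-1] - y[n-2] for one full period per candidate amplitude and keep the amplitudes whose state returns exactly to its starting value. The rounding convention, the Q23 scaling of two_cos, and the search window are illustrative assumptions, so the amplitudes it reports may differ from the 2163013 quoted above.)

% Brute-force scan for "magic" amplitudes of a 24-bit fixed-point resonator.
P       = 1024;                          % samples per period (omega = 2*pi/P)
two_cos = round(cos(2*pi/P) * 2^23);     % Q23 coefficient, = 0x7fff62 here

for A = 2162900:2163100                  % illustrative search window
    ym1 = A;                             % phase chosen so that y[-1] = A
    ym2 = round(A*cos(2*pi/P));          % y[-2] = A*cos(-omega), rounded
    y1s = ym1;  y2s = ym2;               % remember the starting state
    for n = 1:P
        y   = round(2*two_cos*ym1 / 2^23) - ym2;   % assumed rounding convention
        ym2 = ym1;
        ym1 = y;
    end
    if ym1 == y1s && ym2 == y2s          % state repeats exactly after one period
        fprintf('stable amplitude: %d\n', A);
    end
end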

Reply by robert bristow-johnson September 27, 2017
On Wednesday, September 27, 2017 at 2:25:05 AM UTC-4, rickman wrote:
> robert bristow-johnson wrote on 9/26/2017 10:35 PM:
...
> >
> > x (c_0 + c_1 x^2 + c_2 x^4) (5 Mults)
>
> Maybe I'm missing something. Maybe it's the odd symmetry.
yup. the approximated function is sin(pi/2 x), for -1 <= x <= +1, which has odd symmetry.
> Are you saying
> you don't need to calculate terms for x^2 and x^4 times appropriate
> coefficients?
yup.
> What is the final expression you needed to calculate? The
> last line shown would produce
>
> c_0 x + c_1 x^3 + c_2 x^5
>
yup. and we want

    c_0 + c_1 + c_2 = 1

and we want

    c_0 + 3 c_1 + 5 c_2 = 0

with the last degree of freedom, you want to tweak, say, c_2, so that the 3rd harmonic and 5th harmonic have the same amplitude, so as to minimize the maximum amplitude of a harmonic. turns out to be about -70 dB from the first harmonic.

r b-j
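(For context: the two constraints pin the odd polynomial p(x) = c_0 x + c_1 x^3 + c_2 x^5 to p(1) = 1 and p'(1) = 0, i.e. it matches sin(pi/2 * x) in value and slope at the peak. The Octave/Matlab sketch below is one illustrative way to do the c_2 tweak r b-j describes: drive the constrained polynomial with one cycle of a triangle wave and scan c_2 for the value that minimizes the larger of the 3rd and 5th harmonics. The scan range and resolution are arbitrary choices.)

% sin(pi/2*x) ~ c0*x + c1*x^3 + c2*x^5 with c0+c1+c2 = 1 and c0+3*c1+5*c2 = 0,
% which gives c1 = -1/2 - 2*c2 and c0 = 3/2 + c2, leaving c2 free.
N   = 4096;                                       % one full cycle
n   = (0:N-1)';
tri = 2*abs(2*(n/N - floor(n/N + 1/2))) - 1;      % triangle wave in [-1, 1]

best_c2 = 0;  best_worst = inf;
for c2 = linspace(0, 0.2, 2001)                   % illustrative scan range
    c1 = -0.5 - 2*c2;
    c0 =  1.5 + c2;
    y  = c0*tri + c1*tri.^3 + c2*tri.^5;
    Y  = abs(fft(y));
    worst = max(Y(4), Y(6)) / Y(2);               % 3rd and 5th harmonics vs fundamental
    if worst < best_worst
        best_worst = worst;  best_c2 = c2;
    end
end
fprintf('c2 = %.6f, worst of 3rd/5th harmonic = %.1f dB\n', best_c2, 20*log10(best_worst));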
> Isn't that right? Or did you leave out the combining step that might produce
>
> c_0 + c_0 x + c_1 x^2 + c_1 x^3 + c_2 x^4 + c_2 x^5
>
>
> >> Viewed the eclipse at Wintercrest Farms,
> >> on the centerline of totality since 1998
> >
> > i was on the Cerulean-Hopkinsville Road in Kentucky. about 50 feet from the point of greatest eclipse.
>
> The buildup was pretty cool and totality was awesome although short but the
> denouement was rather vapid. Maybe we didn't have enough eclipse dancers.
>
> --
>
> Rick C
>
> Viewed the eclipse at Wintercrest Farms,
> on the centerline of totality since 1998
Reply by rickman September 27, 2017
robert bristow-johnson wrote on 9/26/2017 10:35 PM:
> On Tuesday, September 26, 2017 at 6:20:17 PM UTC-4, rickman wrote:
>> robert bristow-johnson wrote on 9/26/2017 1:43 PM:
>>> On Tuesday, September 26, 2017 at 8:50:55 AM UTC-4, radam...@gmail.com wrote:
>>>> Agreed. Even though I've used this trick in the past, when processing power was really low, I haven't used it recently. Just use a standard interpolated sine lookup table and you'll sleep better at night, plus you can get quadrature by just offsetting the pointers modulo N.
>>>>
>>>
>>> it is the simplest and easiest method in the 99% of the cases where you have the memory for the LUT.
>>
>> The LUT is sized by the resolution required. It works pretty well by using
>> half the required phase bits for the address on the LUT and the remaining
>> phase bits for the interpolation.
>
> well, that's how we normally do it.
>
>> Unless you need a very high resolution
>> phase word, the LUT isn't so large for adequate values of "so large".
>
> i would think so. this ASIC can do 64 voices and has a sampler built in (so it can do a sine-wave from the sampler), but it also has these little DSP blocks with 8 instructions, two inputs, one state, one output (which can also be a state) per block. the instructions can simultaneously multiply, add, and do some other instruction like limit or rectify. but it didn't have a simple LUT.
Specialized devices often are very good at the task they were designed for, but not so good at others.
>>> for a particular synthesizer for a particular synthesizer company in a particular town in Massachusetts i live in, i once did it by generating a simple sawtooth, turning that into a triangle, and running that through a 5th-order, odd-symmetry polynomial.
>>
>> Am I correct in thinking that is at least five multiplies? Or is it 10?
>> Depending on your hardware and your sample rate that's a lot of computation.
>
> you start with "x".
>
> then you have x^2 (1 Mult)
>
> and you have x^4 (2 Mults)
>
> c_2 x^4 (3 Mults)
>
> c_1 x^2 (4 Mults)
>
> c_0 + c_1 x^2 + c_2 x^4 (addition)
>
> x (c_0 + c_1 x^2 + c_2 x^4) (5 Mults)
Maybe I'm missing something. Maybe it's the odd symmetry. Are you saying you don't need to calculate terms for x^2 and x^4 times appropriate coefficients? What is the final expression you needed to calculate? The last line shown would produce

    c_0 x + c_1 x^3 + c_2 x^5

Isn't that right? Or did you leave out the combining step that might produce

    c_0 + c_0 x + c_1 x^2 + c_1 x^3 + c_2 x^4 + c_2 x^5
>> Viewed the eclipse at Wintercrest Farms,
>> on the centerline of totality since 1998
>
> i was on the Cerulean-Hopkinsville Road in Kentucky. about 50 feet from the point of greatest eclipse.
The buildup was pretty cool and totality was awesome although short but the denouement was rather vapid. Maybe we didn't have enough eclipse dancers.

--

Rick C

Viewed the eclipse at Wintercrest Farms,
on the centerline of totality since 1998
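
(A minimal Octave/Matlab sketch of the interpolated sine LUT mentioned above: the upper half of the phase word addresses the table and the lower half drives linear interpolation between adjacent entries. The phase-word width, table size, and frequencies are illustrative choices.)

% Phase-accumulator sine generator with a LUT plus linear interpolation.
PHASE_BITS = 20;                          % total phase-accumulator width
ADDR_BITS  = 10;                          % upper bits -> 1024-entry table
FRAC_BITS  = PHASE_BITS - ADDR_BITS;

L   = 2^ADDR_BITS;
lut = sin(2*pi*(0:L-1)'/L);               % one full cycle in the table

fs = 48e3;  f0 = 1.037e3;
dphi  = round(2^PHASE_BITS * f0/fs);      % phase increment per sample
phase = 0;  N = 1000;  y = zeros(N,1);
for n = 1:N
    addr = floor(phase / 2^FRAC_BITS);            % table index, 0..L-1
    frac = (phase - addr*2^FRAC_BITS) / 2^FRAC_BITS;
    a = lut(addr + 1);
    b = lut(mod(addr + 1, L) + 1);                % next entry, wrapping around
    y(n) = a + frac*(b - a);                      % linear interpolation
    phase = mod(phase + dphi, 2^PHASE_BITS);
end
% A quadrature output just offsets the table address by L/4 (90 degrees).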