I have the interesting project of designing and implementing a digital phase-locked loop in a micro-controller. The input is a square wave whose leading-edge absolute time is captured in hardware and fed into a circular buffer. This input square wave comes from a remote source that is asynchronous to the micro-controller and may vary +/- 10 % from the ideal value. The micro-controller is required to generate a filtered local version of the input.
I'm having a look at Neil Robinson's two-part tutorial on digital PLLs and Reza Ameli's Discrete-Time PLLs, Part 1: Basics, which I found in DSPrelated's blogs. However, these two tutorials seem to deal with loops whose input is a sine wave sampled many times faster than its frequency. In my situation the input square wave is not regularly sampled; instead, each leading edge is time-stamped by a timer in the micro-controller. I basically need to calculate a corresponding local stream of time stamps with reduced jitter (calculations can be performed at higher resolution than the time stamps).
I need to track the incoming square wave to within 1 or 2 % and at the same time filter it, because jitter on the leading edges is expected. The period of the next-but-one local square-wave edge from the DPLL then needs to be calculated so a timer in the micro-controller can be set up ahead of time to generate the local square wave. Calculations will be performed approximately halfway between leading edges of the locally generated square wave.
If anyone could refer me to any other tutorials, papers or books that deal with this kind of DPLL I'd be most grateful, or of course any thoughts directly in this forum thread.
So, in one place you say that the input square wave will vary by up to 10% of the expected value (10% of what?), and someplace else you say you need to track it to within 1% to 2%. These directly contradict.
Putting that aside, I've done similar things. Ultimately, you want to capture the time of the incoming edge vs. the edge time of your square wave, and you want to servo your square wave period such that the average error is zero.
Conceptually it's easy: \(n = k_p e_t + k_i \sum e_t \) where \(n\) is the commanded period in clock ticks, \(e_t\) is the error in clock ticks, and \(k_p\) and \(k_i\) are the proportional and integral gains, respectively.
In practice, you'll end up snarled in all the details of tracking the edge timing accurately, and, if you don't want to do the PI loop tuning by the seat of your pants, in figuring them out from first principles.
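For what it's worth, here is a minimal fixed-point sketch of that PI update in C. The gain values, names, scaling, and sign convention are illustrative assumptions on my part, not anything from this thread:

```c
#include <assert.h>
#include <stdint.h>

/* One PI update of the commanded period, in capture-counter ticks.
 * Sign convention assumed here: positive err means the reference edge
 * arrived after the local edge (local clock running fast), so the
 * period is lengthened.  Gains are fixed-point fractions over 256 to
 * keep everything in integer arithmetic on a small micro-controller. */
static int32_t integ;                       /* running sum of errors */

int32_t pll_update(int32_t nominal, int32_t err)
{
    const int32_t kp_num = 64;              /* kp = 64/256 = 0.25    */
    const int32_t ki_num = 4;               /* ki = 4/256 ~= 0.016   */
    integ += err;
    return nominal + (kp_num * err + ki_num * integ) / 256;
}
```

With zero error the period stays at nominal; a persistent frequency offset ends up absorbed by the integral term, which is what drives the average error to zero.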
Thanks for the reply. I'm curious to know where you got the contradiction you mention in your first paragraph. To expand on the project's context: the incoming square wave should be at 1 kHz, but under fault conditions we will accept an error of up to +- 10 %, i.e. 900 Hz to 1.1 kHz. Whatever the input frequency, it is important that our locally generated square wave is locked to the incoming one exactly in frequency, and in phase to within 2 % of its period (which I did not explain well). Ideally we want 0 deg. phase error, i.e. with a 1 kHz incoming square wave the timing error of the locally generated square wave must be better than +- 20 us (~ +- 7 deg.).
Thanks for the "Conceptually it's easy: \(n = k_p e_t + k_i \sum e_t\)". I'll have a go at implementing it during the next few days.
I did not realize that the \(\pm\)10% was a fallback from a desired \(\pm\)2%. That's what set off the alarm bells.
Without any information about how rapidly the actual signal varies, how quickly you need to acquire lock, how much noise there is in the transmission of the reference and its dynamic characteristics, and how much (and how rapidly) your local reference clock varies, there's absolutely no way to tell beforehand whether you'll meet your goals. Without doing the math you're pretty much limited to b'guess and b'gosh -- which is great when you luck out and the stars align.
The analysis in my PLL tutorial still applies -- however, to use that approach you would need to calculate the gain \(K_p\) of the time-stamp-based phase detector. I suspect there is inherent jitter in this phase-detection method, but maybe it is not a problem for your purposes.
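A rough back-of-envelope way to get that gain for the setup described in this thread (my reading, not the tutorial's derivation): with \(N\) counter ticks per input period (\(N \approx 20000\) here), a phase error of \(\theta\) radians shows up as a time-stamp error of \(e_t = \frac{N}{2\pi}\theta\) ticks, so the detector gain would be \(K_p = N/2\pi \approx 3183\) ticks per radian, with time-stamp quantization adding up to one tick of noise on \(e_t\).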
I've done this sort of thing before. As long as the capturing counter's clock tick is small enough it all works. You can treat it as quantization noise, and use all the established analysis tools as if you were capturing the phase with a magical phase ADC.
I once implemented a TDC phase detector whose input was just an external clock, captured asynchronously as a single-bit signal. I forget how much oversampling I had, but I was surprised how well the PLL worked.
Thanks everyone for the replies, most helpful. Currently have other problems with the project so this bit of it is delayed but I am making progress and will have to do this DSP bit sooner or later. I'll report back when I have something useful to say.
The counter used to measure the incoming pulse period (rising edge to rising edge) runs 20000 times faster than the incoming pulse stream, so the measurement resolution is 1 part in 20000. In the usual, almost-ideal case measurement jitter will therefore be very low. But we need to anticipate some interference on the signal that could affect the pulse edges, hence the need for some filtering.
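A sketch of the capture arithmetic this implies, assuming a free-running unsigned capture counter; the smoothing factor is an arbitrary example value, not from the thread:

```c
#include <assert.h>
#include <stdint.h>

/* Period between consecutive captured rising edges.  With a
 * free-running unsigned counter, modular subtraction gives the right
 * answer across a counter wrap, as long as the true period is shorter
 * than one full roll-over.  At ~20000 ticks per 1 kHz period the
 * resolution is 1 part in 20000, as stated above. */
static inline uint32_t period_ticks(uint32_t prev_edge, uint32_t curr_edge)
{
    return curr_edge - prev_edge;       /* modulo-2^32 handles wrap */
}

/* First-order low-pass on the measured period to knock down edge
 * jitter; the 1/8 smoothing factor here is purely illustrative. */
static inline uint32_t smooth_period(uint32_t avg, uint32_t meas)
{
    int32_t diff = (int32_t)(meas - avg);   /* signed modular difference */
    return avg + (uint32_t)(diff / 8);
}
```

Filtering the period estimate this way (rather than the raw edge times) keeps the arithmetic cheap and plays nicely with the PI loop running on top of it.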
One thing I have decided, and the boss seemed to like, is that the filtering and DPLL will be purely in software and will produce a 'model' of the required output. This keeps the closed-loop code independent of any hardware, so it can be tested offline. I have just written the code to take this DPLL-internal 'model' pulse stream and output it on the micro-controller's pin via a timer peripheral. This bit is working rather nicely. All I need now is some time to do the interesting DSP bit.....
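The 'next-but-one' scheduling described earlier in the thread could be sketched along these lines (a hypothetical helper, assuming the model keeps edge times and the period estimate in counter ticks):

```c
#include <assert.h>
#include <stdint.h>

/* Compare value for the next-but-one model edge, so the timer can be
 * armed a full period before it fires, matching the plan of doing the
 * calculations roughly halfway between local edges.  'last_edge' is
 * the most recent model edge time and 'period' the current DPLL
 * period estimate, both in ticks; modular arithmetic covers wrap. */
static inline uint32_t next_but_one_edge(uint32_t last_edge, uint32_t period)
{
    return last_edge + 2u * period;
}
```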