Many symbol timing control loop structures consist of TED -> loop filter -> NCO -> interpolator. The TED updates once per symbol, but the NCO runs at the input sample rate, so the error signal must be upsampled somewhere in the loop. In most of the literature I've seen on this, the error is upsampled after the TED and before the PI loop filter. The notable exception is Gardner's paper on interpolation control, where the block diagram in Fig. 4 shows the loop filter running at the symbol rate, with its filtered output then upsampled to drive the NCO. From a computational standpoint it's a no-brainer to run the loop filter at the lower rate, which leads me to wonder: is there an advantage to running the loop filter at the higher sample rate?
I've tried implementing them both ways in MATLAB and both seem to work well, so I'm not sure.
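To make the comparison concrete, here's a minimal Python sketch of the Gardner-style arrangement: the PI loop filter updates once per symbol, and its output is held over the samples of each symbol to drive the NCO. The gains, the ideal TED stand-in, and the run length are all made up for illustration; this is not a working receiver.

```python
# Toy model: PI loop filter at the symbol rate, NCO at the sample rate.
# The TED is replaced by an ideal error (true offset minus estimate),
# and the gains are illustrative, not tuned for any real system.
sps = 4                  # samples per symbol
kp, ki = 0.1, 0.01       # PI loop filter gains (hypothetical)
true_offset = 0.3        # timing offset, in symbols, to be acquired
est = 0.0                # loop's running timing estimate
integ = 0.0              # integrator state
nco_phase = 0.0          # NCO accumulator, advanced every input sample

for _ in range(200):                  # one iteration per symbol
    err = true_offset - est           # stand-in for a real TED output
    integ += ki * err
    v = kp * err + integ              # filter output, once per symbol
    for _ in range(sps):              # held constant while the NCO runs
        nco_phase += (1.0 + v) / sps  # fractional part -> interpolator mu
        nco_phase -= int(nco_phase)   # wrap to [0, 1)
    est += v                          # correction applied this symbol

print(abs(true_offset - est))         # residual timing error
```

Moving the zero-order hold to the error signal instead (upsample before the filter) changes the effective loop gain by the hold factor, which may be why both arrangements "work" once the gains are rescaled.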
 M. Rice, "Digital Communications: A Discrete-Time Approach," Pearson, 2008.
 F. J. Harris and M. Rice, "Multirate Digital Filters for Symbol Timing Synchronization in Software Defined Radios," IEEE Journal on Selected Areas in Communications, vol. 19, no. 12, pp. 2346-2357, Dec. 2001.
 P. Fiala and R. Linhart, "Symbol Synchronization for SDR Using a Polyphase Filterbank Based on an FPGA," Radioengineering, vol. 24, no. 3, Sept. 2015.
 F. M. Gardner, "Interpolation in digital modems—Part I: Fundamentals," IEEE Trans. Commun., vol. 41, pp. 501-507, Mar. 1993.
Many years ago, I worked on the development of the modem for the AT&T VideoPhone 2500.
We had an oversampling rate that allowed for the equivalent of 3 sampling phases per symbol. Through some careful implementation, treating each phase semi-independently, we were able to produce a phase error for each of the three. This allowed us to update the PLL on all three phases.
Needless to say, the PLL converged much faster than if we had done the error estimation only once per symbol, and updated the PLL at the symbol rate. We were able to implement a shorter modem startup through this and several other tricks.
Under the assumption that you worked for AT&T at one time, I have a possibly dumb "telephone" question to ask of you. I have a simple ol' fashioned landline touchtone phone in my house. When I speak on the phone my analog speech signal travels through some copper wires to some "telephone company" building. I imagine that somewhere in that building my analog speech signal is applied to an analog-to-digital converter (A/D) for follow-on transmission. My question is: Is my analog speech signal applied to an analog lowpass filter BEFORE the A/D conversion?
Thanks for your confidence in assuming I know something.
The simple answer is yes. The signal sees an analog low pass filter. The process is a little more complicated though, because our A/Ds were of the delta-sigma design more recently, so the low pass is actually at a fairly high frequency.
We did use codecs (as we referred to them) for the analog to digital conversion. The encoding for telephony, as you know, is typically alaw or mu-law compression (or companding) according to G.711. The A/D converters we used were good to about 75-80 dB SNR, and then the compression was done from linear to companded. For the high end telephone modems, we used linear A/D converters internally, and we were shooting for about 85 dB SNR. This gave us sufficient dynamic range for the echo cancellation required to compensate for the hybrid echo inside the modem. (See below for the linear versions)
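As a side note, the G.711 mu-law curve itself is easy to sketch. This is the continuous companding law with mu = 255, not the bit-exact segmented encoder that a real codec implements:

```python
import math

# Continuous mu-law companding curve per G.711 (mu = 255). Real codecs
# use a piecewise-linear segmented approximation of this curve; the
# continuous form below is only a sketch of the idea.
MU = 255.0

def mu_law_compress(x):
    """Map a linear sample in [-1, 1] to the companded domain."""
    return math.copysign(math.log1p(MU * abs(x)) / math.log1p(MU), x)

def mu_law_expand(y):
    """Inverse mapping from the companded domain back to linear."""
    return math.copysign(math.expm1(abs(y) * math.log1p(MU)) / MU, y)
```

The point of companding shows up immediately: a small input like 0.01 maps to roughly 0.23, so low-level speech keeps far more of the quantizer's resolution than it would with uniform steps.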
It has been a number of years, so my memory is not too fresh, but some things I do remember clearly.
We almost always used a bandpass filter on the analog input. I am confident that the high-pass component was typically a third order IIR notch to kill the 50, 60 Hz power line components which often couple into telephony. The pass-band usually started at around 180 Hz. The low pass, if I recall, was typically about a 7th order IIR, with a pass-band cutoff of about 3400 Hz. Both filters were typically elliptical designs. Early designs were analog filters, but the chip sizes were huge because of the required capacitors. Later designs were switched capacitors, which made matching the components a little easier. Even more recent designs were delta-sigma as mentioned.
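For anyone curious what that power-line rejection looks like, here's a hypothetical stand-in: a single biquad with its zeros on the unit circle at 60 Hz (fs = 8 kHz assumed). The production filter was a third-order elliptic design, so this is only meant to illustrate the zero-placement idea, not reproduce it:

```python
import math

# Biquad notch at 60 Hz with fs = 8 kHz -- a toy stand-in for the
# third-order elliptic power-line notch described above.
fs = 8000.0
f0 = 60.0
r = 0.995                  # pole radius: closer to 1 -> narrower notch
w0 = 2.0 * math.pi * f0 / fs
b = [1.0, -2.0 * math.cos(w0), 1.0]        # zeros on the unit circle at +/-w0
a = [1.0, -2.0 * r * math.cos(w0), r * r]  # poles just inside, same angle

def gain_at(f):
    """Magnitude response |H(e^{jw})| at frequency f in Hz."""
    w = 2.0 * math.pi * f / fs
    z = complex(math.cos(w), math.sin(w))
    num = b[0] + b[1] / z + b[2] / z**2
    den = a[0] + a[1] / z + a[2] / z**2
    return abs(num / den)
```

The zeros pin the response to zero at exactly 60 Hz, while the nearby poles pull the response back toward unity everywhere else; the pole radius trades notch width against transient ringing.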
The last codec design I was involved with (as the systems engineer) was a unique design. The A/D was a delta-sigma design that decimated down to 32 kHz sampling, and then we ran digital filtering for the last stage of decimation down to 8 kHz.
In order to limit the tail of the echo that we needed to deal with, we implemented this last stage as an FIR filter with the same characteristic as the equivalent elliptical response. The reason was to limit the time-domain impulse response to a known finite length for the benefit of the echo cancellers. This avoided the infinite tail of the usual IIR equivalent. The fundamental ideas are contained in the patents https://patents.google.com/patent/US5512898 and https://patents.google.com/patent/US5561424, which are now expired. The honest truth is that I don't remember the difference between them, and I did not take the time to review them for my answer here.
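To illustrate why the FIR choice bounds the echo tail, here's a toy drop-by-4 decimator (32 kHz to 8 kHz). The taps are a Hamming-windowed sinc, not the elliptic-matched design from the patents; the point is only that the impulse response ends after len(taps) samples, so an echo canceller sees a strictly bounded tail:

```python
import math

# Toy 32 kHz -> 8 kHz decimator: lowpass FIR, then keep every 4th output.
# Taps are a Hamming-windowed sinc (illustrative, not the patented design).
M = 4                      # decimation factor
N = 31                     # number of taps (odd, for a symmetric design)
fc = 0.5 / M               # cutoff as a fraction of the input sample rate

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

taps = []
for n in range(N):
    k = n - (N - 1) / 2.0
    w = 0.54 - 0.46 * math.cos(2.0 * math.pi * n / (N - 1))  # Hamming window
    taps.append(2.0 * fc * sinc(2.0 * fc * k) * w)

def decimate(x):
    """Filter and keep every M-th output sample."""
    y = []
    for i in range(0, len(x) - N + 1, M):
        y.append(sum(t * s for t, s in zip(taps, x[i:i + N])))
    return y
```

An impulse into this filter is fully flushed after N input samples, exactly the "known finite length" property; the equivalent elliptic IIR would keep ringing (decaying but never exactly zero) indefinitely.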
One specific codec I remember was a linear part carrying the number T7525. I found many data sheets for the modem chipsets we built that reference use of this codec for telephony audio on the plug-in modems. You can find some of these at:
Another codec specifically for telephony (alaw, mu-law) was the T7512 found at:
Unfortunately, neither includes an image of the band-pass response, which I recall some of the data sheets used to have.
Thanks for your question. It has taken me down memory lane. Sorry for the long-winded answer.
Thanks a lot for your detailed May 11th post. I can see that telephone signal processing is much more complicated than I had imagined!
That is fascinating! Thanks for sharing. I wish I knew more people like you with the experience and knowledge of these "industry" designs and clever tricks that are difficult or impossible to find in a book.
Hi Dres. Like you, I am also fascinated by the clever DSP tricks I've encountered in the literature of DSP. And I've been "collecting" some of those tricks for the last 25 years. Places you can go to see some of those tricks are:
 In the "DSP Tips & Tricks" column of the IEEE Signal Processing Magazine, starting with the January 2003 issue. The early column articles are listed in my May 9th post at:
 Many of those column articles were compiled in the book "Streamlining Digital Signal Processing" published by IEEE/Wiley & Sons Publishing.
 Chapter 13 of my "Understanding Digital Signal Processing", 3rd Edition, book contains descriptions of 51 topics that I think are DSP tricks.
 My blogs here on dsprelated.com include some of the "tricks" in the items above.
Like Gardner, I've always found it easier to run the loop filter at the symbol rate. There's not really any reason to run it faster, since it's not getting information faster than that.
There is also some analysis suggesting it is beneficial to run the NCO at as high a clock rate as possible, depending on the system architecture. In any case, I think the secret is really just keeping track of what you're doing and making certain that your analysis accounts for the detector gain and the oscillator gain with respect to the clock rates being used.
This linked presentation may help, as it attempts to explain an analysis technique that uses the detector and oscillator (NCO) gains to mimic a traditional analog loop filter analysis technique. If you do that, it's all pretty straightforward, but there are details to take care of.
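As a small illustration of that bookkeeping, here's how a detector gain Kp and an NCO gain K0 fold into second-order PI loop filter coefficients for a chosen normalized noise bandwidth Bn*T and damping factor zeta, using the standard discrete-time formulas (as found, e.g., in Rice's book cited above). The numbers in the example call are made up, not taken from the linked presentation:

```python
# Fold detector gain Kp and NCO gain K0 into discrete-time PI loop
# filter gains for a target noise bandwidth Bn*T and damping zeta.
def pi_gains(BnT, zeta, Kp, K0):
    theta = BnT / (zeta + 1.0 / (4.0 * zeta))
    d = 1.0 + 2.0 * zeta * theta + theta * theta
    k1 = (4.0 * zeta * theta / d) / (Kp * K0)   # proportional gain
    k2 = (4.0 * theta * theta / d) / (Kp * K0)  # integral gain
    return k1, k2

# Illustrative numbers only: Bn*T = 0.01, critically-ish damped,
# hypothetical detector gain of 2.7, unity NCO gain.
k1, k2 = pi_gains(BnT=0.01, zeta=0.7071, Kp=2.7, K0=1.0)
```

Note that if the NCO runs at S samples per symbol while the detector updates once per symbol, K0 must be expressed in consistent units (per update interval), which is exactly the kind of detail that bites if you don't track the clock rates.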
I hope that helps.
Thanks a ton! It's always good to hear when people find stuff useful. ;)
I agree that there is no advantage to running the loop filter faster than the rate of the TED. It is not a big deal to do the loop computations with the loop filter and NCO running at different rates. Take a look at Figure 4 and equations 3 and 4 of my post:
Note the post is not about clock recovery per se, but the math is the same.
Thanks for the helpful link!