Reply by Clay September 30, 2009
On Sep 30, 1:04 pm, Richard Dobson <richarddob...@blueyonder.co.uk>
wrote:
> Clay wrote:
> ..
>>>
>>> "less than one sampling interval" is not meaningful if adding in some
>>> other constant delay greater than a sampling interval.  so, if it's
>>> real-time, i guess Sammy will need some real-fast A/D and D/A (simple
>>> audio 1-bit codecs won't do) he can accomplish delay by less than a
>>> sampling interval by linearly interpolating between the most current
>>> two samples.
>>>
>>>     y[n] = x[n]*(1-t0/T) + x[n-1]*(t0/T)
>>>
>>> oh crap!  the ZOH of the D/A will put in another 1/2 sample delay, so
>>> Sammy, it would have to be greater than 1/2 sample in any case.
>>>
>>> but, if what you want is a precision delay, where the precision is
>>> much less than a sampling interval, and you can tolerate a minimum
>>> delay of a constant and integer value (say, 32 samples), then you can
>>> have two versions of the same signal but one is delayed relative to
>>> the other by much less than a sampling interval.
>
> I am totally confused by this thread. Is it discussing general
> fractional delays, or (literally) what the subject line says? If this is
> non-causal (offline, etc.) you can do whatever you like. But if it is a
> real-time stream, I would have thought such a delay is impossible, as
> until the second sample arrives (one whole interval behind the first
> one) you do not know what values to interpolate between. The minimum
> latency will be longer than the target delay.
>
> Richard Dobson
Richard,

Certainly you bring up a valid point about what the OP is really asking.
I skipped ahead to the causal cases, and in that situation many
approaches exist to get the desired result, as offered by various
responders.

Now if the antialiasing filter is designed to meet certain properties
before the signal is sampled, then you can go a long way toward an
approximate reconstruction of the signal using only one-sided
interpolation. Clearly such filters will not be linear phase. I believe
this can get you close to the delay you want, but the "rub" is that the
signal then has a nonstandard digital form, and sinc reconstruction is
not what you want to use. Unser describes non-sinc antialiasing and
reconstruction filter kernels in his paper "Sampling -- 50 Years After
Shannon." The May 2009 issue of Signal Processing Magazine has an
article, "Beyond Bandlimited Sampling," by Eldar and Michaeli, which
explores other types of filters and signal representations and takes
this idea further.

Just more food for thought.

Clay
Reply by Jerry Avins September 30, 2009
Richard Dobson wrote:
> Clay wrote:
> ..
>>>
>>> "less than one sampling interval" is not meaningful if adding in some
>>> other constant delay greater than a sampling interval.  so, if it's
>>> real-time, i guess Sammy will need some real-fast A/D and D/A (simple
>>> audio 1-bit codecs won't do) he can accomplish delay by less than a
>>> sampling interval by linearly interpolating between the most current
>>> two samples.
>>>
>>>     y[n] = x[n]*(1-t0/T) + x[n-1]*(t0/T)
>>>
>>> oh crap!  the ZOH of the D/A will put in another 1/2 sample delay, so
>>> Sammy, it would have to be greater than 1/2 sample in any case.
>>>
>>> but, if what you want is a precision delay, where the precision is
>>> much less than a sampling interval, and you can tolerate a minimum
>>> delay of a constant and integer value (say, 32 samples), then you can
>>> have two versions of the same signal but one is delayed relative to
>>> the other by much less than a sampling interval.
>
> I am totally confused by this thread. Is it discussing general
> fractional delays, or (literally) what the subject line says? If this is
> non-causal (offline, etc.) you can do whatever you like. But if it is a
> real-time stream, I would have thought such a delay is impossible, as
> until the second sample arrives (one whole interval behind the first
> one) you do not know what values to interpolate between. The minimum
> latency will be longer than the target delay.
If you absolutely must get some result and the desired delay is not too
large (the user defines "too large"), one can extrapolate forward from
the last sample and the one before it. My general opinion? Ugh!

Jerry

--
Engineering is the art of making what you want from things you can get.
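For concreteness, a minimal sketch of that two-point forward
extrapolation (plain Python with NumPy; extrapolate_ahead and alpha are
illustrative names, not anything from the thread):

    import numpy as np

    def extrapolate_ahead(x, alpha):
        # Predict alpha samples ahead (0 < alpha < 1) from the last two
        # samples: y[n] ~ x[n] + alpha*(x[n] - x[n-1]).  Crude, and it
        # amplifies high-frequency content, hence the "Ugh!".
        x = np.asarray(x, dtype=float)
        y = np.empty_like(x)
        y[0] = x[0]                       # no earlier sample to difference
        y[1:] = x[1:] + alpha * (x[1:] - x[:-1])
        return y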
Reply by Richard Dobson September 30, 2009
Clay wrote:
..
>>
>> "less than one sampling interval" is not meaningful if adding in some
>> other constant delay greater than a sampling interval.  so, if it's
>> real-time, i guess Sammy will need some real-fast A/D and D/A (simple
>> audio 1-bit codecs won't do) he can accomplish delay by less than a
>> sampling interval by linearly interpolating between the most current
>> two samples.
>>
>>     y[n] = x[n]*(1-t0/T) + x[n-1]*(t0/T)
>>
>> oh crap!  the ZOH of the D/A will put in another 1/2 sample delay, so
>> Sammy, it would have to be greater than 1/2 sample in any case.
>>
>> but, if what you want is a precision delay, where the precision is
>> much less than a sampling interval, and you can tolerate a minimum
>> delay of a constant and integer value (say, 32 samples), then you can
>> have two versions of the same signal but one is delayed relative to
>> the other by much less than a sampling interval.
>>
I am totally confused by this thread. Is it discussing general fractional
delays, or (literally) what the subject line says? If this is non-causal
(offline, etc.) you can do whatever you like. But if it is a real-time
stream, I would have thought such a delay is impossible, as until the
second sample arrives (one whole interval behind the first one) you do
not know what values to interpolate between. The minimum latency will be
longer than the target delay.

Richard Dobson
Reply by Clay September 30, 2009
On Sep 29, 5:32 pm, robert bristow-johnson <r...@audioimagination.com>
wrote:
> On Sep 29, 4:36 pm, Clay <c...@claysturner.com> wrote:
>> On Sep 29, 4:45 am, "SammySmith" <eigenvect...@yahoo.com> wrote:
>>> Hi all,
>>>
>>> Is it possible to delay digital data, by a fraction of the sampling
>>> interval. i.e. if fs=1/Ts, where fs is the sampling frequency and Ts
>>> the sampling interval.
>>> My understanding is that it can be done with interpolation, but that
>>> would require a higher clock. Is it possible without using a higher
>>> clock?
>>>
>>> Regards,
>>> Sam
>>
>> If your delay (t0) is much less than half of a sample time, then
>>
>> y(t-t0) = y(t) - (t0)*y'(t)
>>
>> gives a pretty good approximation.
>
> maybe there's something very basic that i am missing on the outset.
> how do you get y'(t)?
>
>> Higher accuracy comes from extending the number of terms in the Taylor
>> approximation.
>>
>> y(t-t0) = y(t) - (t0)*y'(t) + (t0^2)*y''(t)/2 - (t0^3)*y'''(t)/6 + ...
>>
>> Yes you do this with a bank of differentiators.
>
> are these differentiators causal?
>
> as i read the subject line of the thread, i think there is no
> phase-linear method better than linear interpolation.
>
> "sampling interval" implies a discrete-time system.
>
> "less than one sampling interval" is not meaningful if adding in some
> other constant delay greater than a sampling interval.  so, if it's
> real-time, i guess Sammy will need some real-fast A/D and D/A (simple
> audio 1-bit codecs won't do) he can accomplish delay by less than a
> sampling interval by linearly interpolating between the most current
> two samples.
>
>     y[n] = x[n]*(1-t0/T) + x[n-1]*(t0/T)
>
> oh crap!  the ZOH of the D/A will put in another 1/2 sample delay, so
> Sammy, it would have to be greater than 1/2 sample in any case.
>
> but, if what you want is a precision delay, where the precision is
> much less than a sampling interval, and you can tolerate a minimum
> delay of a constant and integer value (say, 32 samples), then you can
> have two versions of the same signal but one is delayed relative to
> the other by much less than a sampling interval.
>
> r b-j
Hello Robert,

Well, of course all of the differentiators need to be causal; in fact,
let them have definite symmetries so they are all linear phase. Sure,
there is an overall delay. y(t) is the original function, and if you
wish, let t = n*T where n is an integer and T is the sample period.
Likewise, let t0 be a delay less than 1/2 of T. I assume that if he
needs 5.2 samples of delay, he can easily handle the 5 part; he then
really needs a way to resample the original stream at a 0.2-sample
offset.

The way I proposed allows one to dynamically change the delay in real
time. Polyphase methods let you select from various phases, but the
selection of phases may end up being too coarse. And yes, linear
interpolation is another method for resampling.

What the OP is really asking about is various ways to resample his
signal. If we know something about his expected bandwidth (spectral
occupancy) in comparison to the sampling rate, then a better resampler
can be designed for his application. I'm just throwing out an idea for
thought.

Clay
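To make the fixed-delay case concrete, a minimal sketch (NumPy;
taylor_delay_fir is an illustrative name, and simple symmetric
differences stand in for properly designed linear-phase differentiators)
of folding the first three Taylor terms into a single 3-tap FIR:

    import numpy as np

    def taylor_delay_fir(d):
        # One FIR for a fixed fractional delay d (|d| < 0.5).  Taps are
        # ordered for np.convolve and multiply x[n+1], x[n], x[n-1], so
        # there is one sample of latency to fold into the integer delay.
        identity = np.array([0.0, 1.0, 0.0])
        diff1 = np.array([0.5, 0.0, -0.5])   # central difference ~ T*y'
        diff2 = np.array([1.0, -2.0, 1.0])   # second difference  ~ T^2*y''
        return identity - d * diff1 + 0.5 * d**2 * diff2

    x = np.sin(0.2 * np.arange(32))
    y = np.convolve(x, taylor_delay_fir(0.2), mode='same')  # ~0.2-sample delay

For a delay that varies in real time, keep the difference filters running
and just reweight their outputs sample by sample, which is the
dynamic-delay advantage mentioned above (essentially a Farrow structure).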
Reply by Tim Wescott September 29, 2009
On Tue, 29 Sep 2009 22:55:51 +0000, glen herrmannsfeldt wrote:

> Tim Wescott <tim@seemywebsite.com> wrote:
> (snip, I wrote)
>
> <> For some definition of exact.  You can't get rid of the quantization
> <> noise from the original sampling, and I believe it will add more with
> <> each resampling.
>
> < You would only add quantization noise to the extent that it's already
> < there, plus any numerical noise.  A delay operation is of necessity an
> < all-pass operation so with perfect math you wouldn't add noise there;
> < presumably one could use enough precision to make the computation
> < noise insignificant -- but that's an assumption that would have to be
> < checked.
>
> To be more specific, when the signal is first quantized some information
> is lost.  That is, quantization noise is added.  If from that signal one
> generates a signal with a shifted sample point, one can't get the exact
> answer that one would have sampling the original signal at those points.
> The difference should be small, but non-zero.  My guess is that for
> resampling half way between sample points it multiplies the quantization
> noise by sqrt(2).  Another explanation is that new quantization noise is
> added at the new sample points, with random phase difference, such that
> the result is sqrt(2) times as big.
A quick in-the-head 'proof' says that the RMS noise power should be the
same for a white signal -- but that wouldn't prevent the peak-to-peak
amplitude of the noise from going up.

Note that if the signal is known to be band-limited below 1/2 the sample
rate and the quantization noise hasn't been filtered out yet, one could
actually reduce the overall sampling noise power (if not the peak
amplitude).

--
www.wescottdesign.com
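That RMS claim is easy to test numerically. A sketch (NumPy; the signal,
bit depth, and filter length are all illustrative choices) that quantizes
a band-limited signal to 12 bits, shifts it half a sample with a
windowed-sinc FIR, and compares the noise before and after:

    import numpy as np

    rng = np.random.default_rng(0)
    n = 1 << 14
    t = np.arange(n)

    # Band-limited test signal, then 12-bit quantization.
    x = sum(np.sin(2 * np.pi * f * t + rng.uniform(0, 2 * np.pi))
            for f in (0.01, 0.037, 0.11))
    q = 2.0 ** -11
    xq = q * np.round(x / q)              # adds ~ q/sqrt(12) RMS noise

    # Half-sample delay: windowed sinc, 63 taps, 31.5 samples of latency.
    taps = 63
    k = np.arange(taps) - (taps - 1) / 2 - 0.5
    h = np.sinc(k) * np.hamming(taps)
    h /= h.sum()

    y_clean = np.convolve(x, h, mode='same')
    y_quant = np.convolve(xq, h, mode='same')

    e0 = np.std(xq - x)                   # noise before the shift
    e1 = np.std(y_quant - y_clean)        # noise after the shift
    print(e0, e1, e1 / e0)                # ratio comes out near 1

Since the shifter is approximately all-pass, white quantization noise
keeps essentially the same RMS through it, though individual peak
excursions can indeed grow.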
Reply by glen herrmannsfeldt September 29, 2009
Tim Wescott <tim@seemywebsite.com> wrote:
(snip, I wrote)
 
<> For some definition of exact.  You can't get rid of the quantization
<> noise from the original sampling, and I believe it will add more with
<> each resampling.
 
< You would only add quantization noise to the extent that it's already 
< there, plus any numerical noise.  A delay operation is of necessity an 
< all-pass operation so with perfect math you wouldn't add noise there; 
< presumably one could use enough precision to make the computation noise 
< insignificant -- but that's an assumption that would have to be checked.

To be more specific, when the signal is first quantized some information
is lost.  That is, quantization noise is added.  If from that signal
one generates a signal with a shifted sample point, one can't get the
exact answer that one would have sampling the original signal at those
points.  The difference should be small, but non-zero.  My guess is
that for resampling half way between sample points it multiplies
the quantization noise by sqrt(2).  Another explanation is that new
quantization noise is added at the new sample points, with random phase
difference, such that the result is sqrt(2) times as big.  

(snip)

-- glen 
Reply by Tim Wescott September 29, 2009
On Tue, 29 Sep 2009 16:06:42 +0000, glen herrmannsfeldt wrote:

> Tim Wescott <tim@seemywebsite.com> wrote:
> < On Tue, 29 Sep 2009 02:19:31 -0700, Rune Allnor wrote:
> <
> <> On 29 Sep, 10:45, "SammySmith" <eigenvect...@yahoo.com> wrote:
> <
> <>> Is it possible to delay digital data, by a fraction of the sampling
> <>> interval. i.e. if fs=1/Ts, where fs is the sampling frequency and Ts
> <>> the sampling interval.
> (snip)
>
> <> You can do something similar with a linear-phase all-pass filter.
> <> Just keep in mind that you can't produce a perfect such filter, so
> <> the end result will be an approximation of the desired result.
>
> As written, I would say the answer is no.  Note that it says "fraction
> of the sampling interval" not "fraction plus some integer times the
> sampling interval" (otherwise called an improper fraction)...
>
> < But but but...
> <
> < If the sampled signal is band limited then the result should be exact,
> < as a corollary to perfect reconstruction of a band limited signal.
>
> For some definition of exact.  You can't get rid of the quantization
> noise from the original sampling, and I believe it will add more with
> each resampling.
You would only add quantization noise to the extent that it's already there, plus any numerical noise. A delay operation is of necessity an all-pass operation so with perfect math you wouldn't add noise there; presumably one could use enough precision to make the computation noise insignificant -- but that's an assumption that would have to be checked.
> < Granted, it may take an infinite amount of time to get an answer, but
> < what's a bit of delay?
>
> The OP doesn't seem to indicate the allowable delay, other than "a
> fraction of a sampling interval."  In other words, real time.
>
> -- glen
My brain just kind of skidded over that; if the OP really truly wants
less than a sample of delay then he basically needs to build a predictor,
which will of necessity be quite an approximation. OTOH, if he can stand
delay it should be close.

--
www.wescottdesign.com
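"Close" here can be made concrete with a Lagrange-interpolating FIR,
where the fractional part of the delay rides on top of an integer
latency. A minimal sketch (NumPy; lagrange_frac_delay_taps is an
illustrative name):

    import numpy as np

    def lagrange_frac_delay_taps(d, order=3):
        # FIR taps that delay by d samples via Lagrange interpolation;
        # best behaved for d near order/2.  order=1 reduces to the
        # two-point linear interpolation discussed elsewhere in the thread.
        k = np.arange(order + 1)
        h = np.ones(order + 1)
        for i in k:
            mask = k != i
            h[mask] *= (d - i) / (k[mask] - i)
        return h

    # A 32.3-sample total delay: 31 samples of plain buffering, then a
    # cubic interpolator whose own delay is set to 1.3 samples.
    h = lagrange_frac_delay_taps(1.3, order=3)
    x = np.sin(0.2 * np.arange(64))
    y = np.convolve(x, h)[:len(x)]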
Reply by robert bristow-johnson September 29, 2009
On Sep 29, 4:36 pm, Clay <c...@claysturner.com> wrote:
> On Sep 29, 4:45 am, "SammySmith" <eigenvect...@yahoo.com> wrote:
>> Hi all,
>>
>> Is it possible to delay digital data, by a fraction of the sampling
>> interval. i.e. if fs=1/Ts, where fs is the sampling frequency and Ts
>> the sampling interval.
>> My understanding is that it can be done with interpolation, but that
>> would require a higher clock. Is it possible without using a higher
>> clock?
>>
>> Regards,
>> Sam
>
> If your delay (t0) is much less than half of a sample time, then
>
> y(t-t0) = y(t) - (t0)*y'(t)
>
> gives a pretty good approximation.
maybe there's something very basic that i am missing on the outset. how do you get y'(t)?
> Higher accuracy comes from extending the number of terms in the Taylor
> approximation.
>
> y(t-t0) = y(t) - (t0)*y'(t) + (t0^2)*y''(t)/2 - (t0^3)*y'''(t)/6 + ...
>
> Yes you do this with a bank of differentiators.
are these differentiators causal?

as i read the subject line of the thread, i think there is no
phase-linear method better than linear interpolation.

"sampling interval" implies a discrete-time system.

"less than one sampling interval" is not meaningful if adding in some
other constant delay greater than a sampling interval.  so, if it's
real-time, i guess Sammy will need some real-fast A/D and D/A (simple
audio 1-bit codecs won't do); he can accomplish delay by less than a
sampling interval by linearly interpolating between the most current
two samples.

    y[n] = x[n]*(1-t0/T) + x[n-1]*(t0/T)

oh crap!  the ZOH of the D/A will put in another 1/2 sample delay, so
Sammy, it would have to be greater than 1/2 sample in any case.

but, if what you want is a precision delay, where the precision is
much less than a sampling interval, and you can tolerate a minimum
delay of a constant and integer value (say, 32 samples), then you can
have two versions of the same signal but one is delayed relative to
the other by much less than a sampling interval.

r b-j
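That two-sample blend is a cheap streaming operation. A minimal sketch in
plain Python (frac_delay_lin and its single state variable are
illustrative names, not from the post):

    def frac_delay_lin(samples, frac):
        # Delay a stream by frac of a sample (0 <= frac < 1):
        # y[n] = x[n]*(1 - frac) + x[n-1]*frac.  State is one sample.
        prev = 0.0                      # x[n-1], zero initial condition
        out = []
        for x in samples:
            out.append(x * (1.0 - frac) + prev * frac)
            prev = x
        return out

    y = frac_delay_lin([0.0, 1.0, 0.0, -1.0, 0.0], 0.25)  # lags by 0.25 sample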
Reply by Dirk Bell September 29, 2009
On Sep 29, 4:45 am, "SammySmith" <eigenvect...@yahoo.com> wrote:
> Hi all,
>
> Is it possible to delay digital data, by a fraction of the sampling
> interval. i.e. if fs=1/Ts, where fs is the sampling frequency and Ts
> the sampling interval.
> My understanding is that it can be done with interpolation, but that
> would require a higher clock. Is it possible without using a higher
> clock?
>
> Regards,
> Sam
You haven't said how oversampled the signal is; that would make a
difference.

Also, if you interpolate up and decimate back down to the original rate,
all of this is numerical, so I don't see why an additional clock would be
required (a sketch of this route follows below). I am guessing that the
problem of adding extra full samples of delay, as brought up by Glen, is
not a problem. True?

Clay's suggestion of using derivatives will take extra processing
(filtering), the design of which will require some information about the
signal bandwidth relative to the sample rate.

Please provide a little more information.

Dirk A. Bell
DSP Consultant
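A minimal sketch of the interpolate-up/decimate-down route (NumPy and
SciPy; frac_delay_updown, p, and q are illustrative names, and
scipy.signal.resample_poly is used because it compensates its own
polyphase filter delay):

    import numpy as np
    from scipy import signal

    def frac_delay_updown(x, p, q):
        # Delay x by p/q of a sample (0 < p < q): interpolate up by q,
        # shift p high-rate samples, decimate back down by q.  All of it
        # is arithmetic at the original rate; no faster hardware clock.
        xu = signal.resample_poly(x, q, 1)       # q-times oversampled
        xu = np.concatenate([np.zeros(p), xu])   # p/q-sample delay
        return xu[::q][:len(x)]                  # back to original rate

    y = frac_delay_updown(np.sin(0.2 * np.arange(64)), p=1, q=4)  # 0.25 sample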
Reply by Clay September 29, 2009
On Sep 29, 4:45 am, "SammySmith" <eigenvect...@yahoo.com> wrote:
> Hi all,
>
> Is it possible to delay digital data, by a fraction of the sampling
> interval. i.e. if fs=1/Ts, where fs is the sampling frequency and Ts
> the sampling interval.
> My understanding is that it can be done with interpolation, but that
> would require a higher clock. Is it possible without using a higher
> clock?
>
> Regards,
> Sam
If your delay (t0) is much less than half of a sample time, then

    y(t-t0) = y(t) - (t0)*y'(t)

gives a pretty good approximation. Higher accuracy comes from extending
the number of terms in the Taylor approximation:

    y(t-t0) = y(t) - (t0)*y'(t) + (t0^2)*y''(t)/2 - (t0^3)*y'''(t)/6 + ...

Yes, you do this with a bank of differentiators. If the needed delay is
constant, then all of the terms may be summed to form a single FIR type
of structure. An advantage of this approach is that it accommodates a
variable delay. What will work best for you will depend on
so-far-unspecified parameters.

Clay
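A minimal sketch of this Taylor-series delay (NumPy; taylor_frac_delay is
an illustrative name, and simple symmetric differences stand in for the
properly designed differentiator filters the post has in mind):

    import numpy as np

    def taylor_frac_delay(x, d, order=2):
        # y(t - d*T) ~ y - d*T*y' + (d*T)^2 * y''/2, with derivatives
        # estimated by linear-phase central differences.  Good for
        # |d| << 0.5; the symmetric stencils imply a little latency.
        x = np.asarray(x, dtype=float)
        y = x - d * np.gradient(x)                  # np.gradient ~ T*y'
        if order >= 2:
            d2 = np.convolve(x, [1.0, -2.0, 1.0], mode='same')  # ~ T^2*y''
            y += 0.5 * d ** 2 * d2
        return y

    y = taylor_frac_delay(np.sin(0.2 * np.arange(64)), d=0.1)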