Estimation of inter-sample peaks
Started by ●November 9, 2010

Hi, I've been doing a bit of thinking about true peak estimation. I know
that the conventional wisdom is to upsample: the higher the upsample rate
and the better the reconstruction filter, the more accurate the estimate.

However, I began to wonder what the maximum error might be in the original
data. E.g. it is not difficult to imagine a continuous sine wave so close
to Nyquist that the error is effectively infinite, but of course in most
real-world applications there isn't much energy up that close, and even if
there is, the continuous sine wave isn't an isolated event, so you'd
eventually see the true peak even if there is a beat frequency to it. With
a sine at fs/4 it is easy to imagine a permanent error of 3dB, but this is
also a bit contrived.

For isolated events/signals, I started with an isolated full-scale unit
sample and worked backwards, i.e. the unit sample wouldn't normally exist
in a band-limited signal (unless the data was corrupt or otherwise
synthesized). So if I band-limit to say fc, I get a sinc pulse (ref
https://ccrma.stanford.edu/~jos/sasp/Ideal_Lowpass_Filter.html).

Now if I manage to mis-sample the sinc pulse so inconveniently that it
straddles two samples either side of the peak, I can predict the error,
and it turns out that this is sinc(pi.fc), where the sinc pulse is
normalised to full scale and fc is the normalised bandwidth of the
system/signal.

As an example, an audio system at 48kHz limited to 22kHz bandwidth should
in theory experience a maximum error between samples and true peak of
-3.23dB (if I got my maths correct).

This isn't a large error, so (the culmination of all this rambling) the
question is: what is the largest error which could exist for a
band-limited signal, excluding continuous sine waves, i.e. for discrete
pulses of energy?
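As a quick numeric sanity check of that worst-case figure, here is a short Python sketch (not from the thread) evaluating a peak-normalised sinc pulse half a sample away from its peak; it comes out near -3.24dB, close to the -3.23dB quoted:

```python
import math

def norm_sinc(x):
    # normalised sinc: sin(pi*x)/(pi*x), with the removable singularity at 0
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

fs = 48000.0  # sample rate (Hz)
fc = 22000.0  # bandwidth of the band-limited pulse (Hz)

# Worst case: the pulse peak lands exactly half way between two samples,
# so the largest sample is the pulse evaluated half a sample off-peak.
# A peak-normalised sinc pulse of bandwidth fc is sinc(2*fc*t); half a
# sample is t = 0.5/fs, giving sinc(fc/fs).
largest_sample = norm_sinc(fc / fs)
error_db = 20.0 * math.log10(largest_sample)
print(round(error_db, 2))  # about -3.24 dB
```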
Reply by ●November 9, 2010
davew <david.wooff@gmail.com> wrote:
> I've been doing a bit of thinking about true peak estimation. [snip]
> With a sine at fs/4 it is easy to imagine a
> permanent error of 3dB but this is also a bit contrived.

Yes, I usually figure the Fs/4 sine with the peak half way between two
samples, and a peak at sqrt(2) times full scale. Note that, conveniently,
for such a sine the RMS is equal to the full-scale voltage.

> Now if I manage to mis-sample the sinc pulse so inconveniently that it
> straddles two samples either side of the peak, I can predict the error
> and it turns out that this is sinc(pi.fc) [snip]

Some definitions of sinc(x) include the pi, but otherwise...

> As an example, an audio system at 48KHz limited to 22KHz bandwidth
> should in theory experience a maximum error between samples and true
> peak of -3.23dB (if I got my maths correct).

That sounds about right. Just a little more than sqrt(2), or 3.01dB.

-- glen
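Glen's Fs/4 case is easy to reproduce numerically; a small Python sketch (not from the thread) sampling a unit sine at fs/4 phased so its peak falls half way between samples:

```python
import math

# Unit-amplitude sine at fs/4, phased so its peak falls half way between
# samples: every sample then sits at +/-45 degrees, magnitude cos(pi/4),
# so the true peak exceeds the largest sample by sqrt(2), about 3.01 dB.
samples = [math.sin(2.0 * math.pi * (n + 0.5) / 4.0) for n in range(8)]
largest_sample = max(abs(s) for s in samples)
excess_db = 20.0 * math.log10(1.0 / largest_sample)  # true peak vs samples
print(round(largest_sample, 4), round(excess_db, 2))  # 0.7071 3.01
```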
Reply by ●November 10, 2010
On 9 Nov., 18:05, davew <david.wo...@gmail.com> wrote:
> [snip]
> This isn't a large error, so (the culmination of all this rambling on
> is the following question): what is the largest error which could
> exist for a band-limited signal excluding continuous sine-waves i.e.
> discrete pulses of energy?

The largest excess (it is not really an error, it just means that the
interpolated function has higher values than the samples) for a
band-limited signal is not bounded. Consider the pulse defined by
sinc-interpolation of the discrete pulse given by

  p1[n] = [1, 1],

where we assume the second point to be at time 0. The sinc-interpolated
pulse is

  p1(t) = sinc(t+1) + sinc(t).

If you sample this pulse with an offset of 0.5, you get a maximum value of

  p1(-0.5) = 1.27... .

Now you can add terms to this pulse in the following manner:

  p_m[n] = [-1, 1, -1, ..., -1, 1, 1, -1, ..., 1, -1]

(starting with a positive or negative 1 depending on the parity of m) and
generate the corresponding sinc-interpolated pulse p_m(t). The maximum
value of p_m(t) is at t = -0.5, where you have a constructive summing of
terms of the order 1/n. So the maximum value grows without bound as you
increase the length of the pulse, while the absolute sample values always
remain bounded by 1.

But, as you mention, such examples are contrived. However, this phenomenon
also occurs in practice; a guy at TC wrote an interesting AES paper about
it, downloadable from their website:
http://www.tcelectronic.com/media/nielsen_lund_2003_overload.pdf. They
looked at a number of commercial CDs and counted occurrences of peaks in
the interpolated waveform. Leading the list is a track by Eminem with more
than 25 hot spots where the interpolated waveform exceeds full scale
(0 dBFS). And this paper is from 2003 (the examples from 2002 and older);
you can imagine what it would look like today!

Regards,
Andor
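Andor's two-sample pulse is easy to verify numerically; a small Python sketch (not from the thread) of the sinc interpolation, evaluated half way between the two unit samples:

```python
import math

def norm_sinc(x):
    # normalised sinc: sin(pi*x)/(pi*x), with the removable singularity at 0
    return 1.0 if x == 0.0 else math.sin(math.pi * x) / (math.pi * x)

def p1(t):
    # sinc interpolation of the discrete pulse p1[n] = [1, 1],
    # with the second sample taken at time 0
    return norm_sinc(t + 1.0) + norm_sinc(t)

peak = p1(-0.5)  # half way between the two unit samples
print(round(peak, 4))  # 1.2732, i.e. exactly 4/pi
```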
Reply by ●November 10, 2010
On Nov 10, 10:57 am, Andor <andor.bari...@gmail.com> wrote:
> [snip]
> So the maximum value grows without
> bound if you increase the length of the pulse, while the absolute
> sample values always remain bounded by 1.

Thanks to both.

Andor, thanks for the link. In your example I'm not quite sure I can see
how the difference between any sample and a peak can grow without bounds.
Would you mind providing an expression in t which illustrates this? I
tried to model your sequence but can't seem to get it to work.

Regards,
Dave.
Reply by ●November 10, 2010
> > But, as you mention, such examples are contrived. However, this
> > phenomenon also occurs in practice, a guy at TC wrote an interesting
> > AES paper about that is downloadable from their website:
> > http://www.tcelectronic.com/media/nielsen_lund_2003_overload.pdf

Good paper. I am crossposting this thread to rec.audio.pro.

Mark
Reply by ●November 11, 2010
I think the conclusion to this is that the worst possible transient signal
which could be constructed is a sinc pulse corresponding to a bandwidth of
fs/2 but mis-sampled by half a sample. If it were sampled exactly, it
would simply be a unit sample, and therefore the peak value would coincide
with the sample instant, so no difference.

So I believe that the expression for the worst-case difference between a
sample value and the inter-sample peak is given by:

  sinc(pi/2) = 2/pi, i.e. -3.92dB

For periodic signals this doesn't apply.
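That figure is quick to confirm numerically; a small Python sketch (not from the thread, using the unnormalised sinc(x) = sin(x)/x convention):

```python
import math

# Full-bandwidth (fc = fs/2) sinc pulse mis-sampled by half a sample:
# the largest sample is sin(pi/2)/(pi/2) = 2/pi of the true peak.
largest_sample = math.sin(math.pi / 2.0) / (math.pi / 2.0)
print(round(largest_sample, 4))                     # 0.6366
print(round(20.0 * math.log10(largest_sample), 2))  # -3.92
```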
Reply by ●November 11, 2010
On 10 Nov., 16:03, davew <david.wo...@gmail.com> wrote:
> [snip]
> Andor, thanks for the link. In your example I'm not quite sure I can
> see how the difference between any sample and a peak can grow without
> bounds. Would you mind providing an expression in t which illustrates
> this? I tried to model your sequence but can't seem to get it to
> work.

I wrote a Mathematica function, it looks like this:

  nastyFunction[t_, m_] :=
    Sum[(-1)^n Sinc[Pi (t - n)], {n, 0, m}] -
    Sum[(-1)^n Sinc[Pi (t + n)], {n, 1, m + 1}]

m is the number of terms you want to add on both sides. For m=1 it returns

  -Sinc[Pi (-1 + t)] + Sinc[Pi t] + Sinc[Pi (1 + t)] - Sinc[Pi (2 + t)]

and for m=2 it returns

  Sinc[Pi (-2 + t)] - Sinc[Pi (-1 + t)] + Sinc[Pi t] +
  Sinc[Pi (1 + t)] - Sinc[Pi (2 + t)] + Sinc[Pi (3 + t)]

and so on. A plot of nastyFunction[-0.5, k], with k varying from 1 to 50,
is shown here:

https://home.zhaw.ch/~bara/files/discrete%20sequence%20with%20unbounded%20interpolation.gif

You can see nicely the logarithmic growth in k (i.e. the number of
sinc-terms you add). For example

  nastyFunction[-0.5, 10]    = 2.78
  nastyFunction[-0.5, 100]   = 4.19
  nastyFunction[-0.5, 1000]  = 5.65
  nastyFunction[-0.5, 10000] = 7.11

and so on.

Regards,
Andor
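For readers without Mathematica, Andor's function transcribes directly to Python (a sketch assuming Mathematica's convention Sinc[x] = sin(x)/x with Sinc[0] = 1); it reproduces the values listed above:

```python
import math

def m_sinc(x):
    # Mathematica's Sinc: sin(x)/x, with Sinc[0] = 1
    return 1.0 if x == 0.0 else math.sin(x) / x

def nasty_function(t, m):
    # direct transcription of Andor's nastyFunction[t_, m_]
    return (sum((-1) ** n * m_sinc(math.pi * (t - n)) for n in range(0, m + 1))
            - sum((-1) ** n * m_sinc(math.pi * (t + n)) for n in range(1, m + 2)))

for m in (10, 100, 1000, 10000):
    print(m, round(nasty_function(-0.5, m), 2))  # 2.78, 4.19, 5.65, 7.11
```

At t = -0.5 every sinc term contributes with the same sign, and the sum reduces to (2/pi) * (1/0.5 + 1/1.5 + ... + 1/(m+0.5)), a harmonic-like series, which is why the growth is logarithmic in m.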
Reply by ●November 11, 2010
Andor <andor.bariska@gmail.com> writes:
> [...]
> The largest excess (it is not really an error, it just means that the
> interpolated function has higher values than the samples) for a
> band-limited signal is not bounded.

Correct. Here is a proof presented some seven years ago:

http://tinyurl.com/3258a5c

It may be frustrating, but this means that you cannot bound inter-sample
peak differences.

--
Randy Yates                      % "I met someone who looks alot like you,
Digital Signal Labs              %  she does the things you do,
mailto://yates@ieee.org          %  but she is an IBM."
http://www.digitalsignallabs.com % 'Yours Truly, 2095', *Time*, ELO
Reply by ●November 11, 2010
On Thu, 11 Nov 2010 06:42:28 -0800 (PST), davew <david.wooff@gmail.com>
wrote:
> I think the conclusion to this is that the worst possible transient
> signal which could be constructed is a sinc pulse corresponding to a
> bandwidth of fs/2 but mis-sampled by half a sample. [snip]
>
> So I believe that the expression for worst case difference between a
> sample value and the inter-sample peak is given by:
>
>   sinc(pi/2) = 2/pi, i.e. -3.92dB
>
> For periodic signals this doesn't apply.

Jim Lesurf has produced the "Wave from Hell", which is periodic and has
an intersample peak of 5dB. It is out there somewhere on the web.

d
Reply by ●November 11, 2010