Reply by robert bristow-johnson May 20, 2015
On 5/20/15 1:37 AM, Marcel Mueller wrote:
> On 17.05.15 01.26, robert bristow-johnson wrote:
>>> Of course, it makes no sense to have only a few samples more. But
>>> 167% of the FIR length works quite well, e.g. FIR kernel of 6144 and
>>> FFT of 8192.
>>
>> very inefficient. you do *not* want the FIR kernel to be such a large
>> portion of the FFT.
>
> Why not?
...
>> i can send you (or anyone else) two short little pdf files of the above
>> analysis and of why we want the FIR kernel length to be about 1/5 or 1/6
>> of the FFT size (*not* 3/4 of the FFT). but you need to send me a
>> de-spammed return address.
>
> I prefer practical tests over theory. And the number of FLOPs is far
> from the only parameter.
but you can still compare apples to apples. the FFT costs (per block)
something like:

   Cost = C1*N*log2(N) + C2*N + C3*log2(N) + C4

for some constants C1, C2, C3, C4 to be determined from analysis or
profiling of the code performing the FFT and other tasks of the circular
convolution. C1 is the cost of performing an FFT butterfly. C2 is the
cost of all operations performed on each FFT bin, such as multiplication
of H[k] by X[k]. C3 is the overhead cost of setting up each FFT pass and
C4 is the total constant overhead of setting up each block or frame. it
is likely that C3 and C4 can be ignored since they multiply much smaller
numbers than C1 and C2.

the payoff is processing N - L + 1 samples, where L is the length of the
FIR kernel. so the cost per sample to process one block is

   (C1*N*log2(N) + C2*N + C3*log2(N) + C4) / (N - L + 1)

given a length of FIR L, you want to minimize the above expression. try
setting C3 and C4 to zero and C2 approximately equal to C1 and find what
power of 2 for N makes the above expression minimum.

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
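As a quick illustration of that minimization, here is a minimal Python
sketch of the cost-per-sample model, assuming C2 = C1 and C3 = C4 = 0 as
suggested above; the constants are placeholders that would really come
from profiling a particular FFT implementation.

import math

def cost_per_sample(N, L, C1=1.0, C2=1.0, C3=0.0, C4=0.0):
    # cost of one fast-convolution block of FFT size N with an FIR
    # kernel of length L, spread over the N - L + 1 samples it produces
    total = C1 * N * math.log2(N) + C2 * N + C3 * math.log2(N) + C4
    return total / (N - L + 1)

def best_fft_size(L, max_exp=24):
    # search powers of two N >= L for the minimum cost per sample
    candidates = [2**p for p in range(math.ceil(math.log2(L)), max_exp)]
    return min(candidates, key=lambda N: cost_per_sample(N, L))

L = 6144                                  # the FIR kernel length above
N = best_fft_size(L)
print(N, cost_per_sample(N, L))           # optimum N
print(8192, cost_per_sample(8192, L))     # the 167% configuration

With these placeholder constants the optimum N for L = 6144 lands well
above 8192, i.e. a kernel that is a small fraction of the FFT size,
which is the point being argued.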
Reply by Marcel Mueller May 20, 2015
On 17.05.15 01.26, robert bristow-johnson wrote:
>> Of course, it makes no sense to have only a few samples more. But 167%
>> of the FIR length works quite well, e.g. FIR kernel of 6144 and FFT of
>> 8192.
>
> very inefficient. you do *not* want the FIR kernel to be such a large
> portion of the FFT.
Why not? OK, usually I choose 1:1, but the above is from an equalizer
plugin. Although the latency is normally hidden by the player engine's
prefetch, large latencies tend to make seek operations sluggish. With the
above configuration the response time to a seek is less than 50ms,
including network latency for PulseAudio playback on a remote server. Of
course, the prefetch engine is still part of the solution. Most of the
time I use twice as many samples (12k FIR/16k FFT) for higher accuracy
and group delay equalization.
> these are the facts i am dealing with:
>
> the FIR is partitioned into M segments:
>
>           M-1
>    h[n] = SUM h_m[n-D_m]
>           m=0
[...]
> now the only advantage you get from this is that you have a more
> efficient FFT, especially for the (m+1)-th segment because the length of
> the FIR kernel is much smaller: 2/7 of the FFT size, but it's better for
> both segments.
>
> those are the quantitative facts i am dealing with, Marcel. having
>
>    L_m = 3/4 N_m
>
> is just not efficient.
Sorry, I didn't catch what you intend to tell me here. Pretty sure there
are more calculations needed when the overlap window is large, simply
because more FFTs have to be calculated in the same time. Choosing a
larger FFT has, in theory, logarithmic complexity. But in practice it is
worse, because most FFTs are memory bound nowadays (even on a Raspberry
Pi) and cache efficiency degrades sharply with the working set size as
soon as you leave the L1 cache, especially because of the scattered
memory access of the FFT algorithm.

The above, "inefficient" algorithm even works in real time on a 450 MHz
AMD of the 90s (48kHz, 2 channels), although prefetching causes peaks
with 100% CPU load on this device. Choosing larger FFTs does not really
improve the situation, except that the peaks from prefetching take
significantly longer.
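In that spirit, here is a small benchmark sketch (Python with numpy,
both assumed available) that measures cost per output sample directly
instead of counting FLOPs; the absolute numbers are machine-dependent,
which is rather the point.

import timeit
import numpy as np

L = 6144                                   # FIR kernel length from above
for N in (8192, 16384, 32768, 65536):
    x = np.random.randn(N)
    t = timeit.timeit(lambda: np.fft.ifft(np.fft.fft(x)), number=50) / 50
    # one forward and one inverse FFT, amortized over the N - L + 1
    # new output samples each block of size N produces
    print(f"N = {N:>6}: {1e9 * t / (N - L + 1):8.1f} ns per output sample")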
> i can send you (or anyone else) two short little pdf files of the above
> analysis and of why we want the FIR kernel length to be about 1/5 or 1/6
> of the FFT size (*not* 3/4 of the FFT). but you need to send me a
> de-spammed return address.
I prefer practical tests over theory. And the number of FLOPs is far
from the only parameter.

Marcel
Reply by robert bristow-johnson May 20, 2015
On 5/19/15 5:21 PM, Steve Pope wrote:
> robert bristow-johnson <rbj@audioimagination.com> wrote:
>
>> On 5/19/15 11:36 AM, Steve Pope wrote:
>
>>> I would say the latency is (32 + (runtime)) samples, where "runtime"
>>> is the amount of time required to compute 32 output samples, and
>>> conceivably could be quite small (e.g. 1 or 2 samples).
>
>> and then in that case, your DSP or CPU or whatever is jerking off for
>> most of the remaining 30 or 29 samples in the block of time for 32
>> samples.
>
> Right. I think the usual engineering terminology is more like the
> DSP/CPU is "underutilized", or "has a low duty cycle"...
well, you know what they say: "hair grows on the palms of idle hands."
>> and if you were to *require* a latency of 32 + 1 or 2 samples, then you
>> do not have the axiom above that you need not guarantee the validity of
>> any of the 32 samples until the very end of your block of time.
>
> If it's okay to sometimes emit invalid data, perhaps flagging it as
> such, you can conceivably save on hardware.
it certainly isn't simple. to me, this fast convolution thingie does
this in a block time:

1. FFT the input data (must wait until all 32 samples are input).
2. multiply H[k] by X[k].
3. iFFT the output (and you can't really get at the earliest samples in
your block until the iFFT is done).

it should be simple: you have 32/Fs seconds to turn this buffer or
vector of 32 input samples into another chunk of samples. you need all
32 input samples to be good before you can begin, and at the end, with
the iFFT, you'll not likely have the last sample known before the first.
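In code, those three steps come out as a standard overlap-add block,
sketched here in Python/numpy with illustrative sizes (B = 32, a short
17-tap kernel); this is a minimal sketch, not anyone's production
implementation.

import numpy as np

def fast_conv_block(x_block, H, B, L, tail):
    # one block time: FFT the input, multiply by H[k], iFFT the result
    N = len(H)
    X = np.fft.fft(x_block, n=N)           # step 1 (needs all B samples)
    y = np.real(np.fft.ifft(X * H))        # steps 2 and 3
    y[:L - 1] += tail                      # overlap-add the previous tail
    return y[:B], y[B:B + L - 1]           # B outputs, new L-1 sample tail

B, L = 32, 17
N = 64                                     # power of two >= B + L - 1
h = np.random.randn(L)
H = np.fft.fft(h, n=N)

x = np.random.randn(10 * B)
tail = np.zeros(L - 1)
out = []
for i in range(0, len(x), B):
    y_block, tail = fast_conv_block(x[i:i + B], H, B, L, tail)
    out.append(y_block)

# sanity check against direct convolution
assert np.allclose(np.concatenate(out), np.convolve(x, h)[:len(x)])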
> This was commonly done in old CD players since the Red Book specs (BLER
> etc.) were quite liberal, and ultimately there is an error-masking
> feature applied just before the DACs.
it might be dither. might be noise shaping. could even be both.
>> what if you wanted to pack in as much processing as possible in your
>> real-time DSP processing application?
>
> Underutilized hardware is a problem, sometimes. In other scenarios the
> power savings from running hardware at a low duty cycle is more
> important than the silicon area.
>
> It's scenario-dependent.
okay, but the motivation behind my question was regarding this
multi-segmented FIR convolution. if i had a grossly "underutilized" DSP
that is jerking off for 90% of the time, i would wanna put that
lollygagger to useful work. you get more "utilization" out of the DSP by
increasing the length of the FIR filter kernel.

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
Reply by Les Cargill May 20, 2015
robert bristow-johnson wrote:
> On 5/18/15 8:04 PM, Les Cargill wrote:
>
> ...
>
>> For the aforementioned VST interface, it's more than that. I think
>> the smallest my ASIO over USB interface goes to is 32 samples.
>>
>> I don't think you have a 2B problem in addition to that, though.
>
> *if* samples are being processed in 32 sample blocks, which means you
> have a block of time equal to 32 sampling periods to process those 32
> samples...
>
You may need less... but that fact may or may not be part of the delay calculation.
> *if* all 32 input samples are guaranteed to be valid at the beginning of
> your block of time
>
> and *if* you need not guarantee the validity of any of the 32 output
> samples until the very end of your block of time,
>
> *then* your processing delay must be at least 64 samples.
>
> L8r,
>
That seems perfectly reasonable. The Gardner paper says you can trade
delay for reduced CPU cost. It also says a filter of 128k samples will
cost ~427 multiplies: 34 * log2(N) - 151, which gives 34 * 17 - 151 = 427
for N = 2^17 = 128k. He seems to be pretty firm about it being *ZERO*
delay, although you almost get the feeling it's as if it were a proper
name.

--

Les Cargill
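Taking that formula at face value (multiplies per output sample, which
is one way to read the paper), the arithmetic checks out and scales the
way a quick Python loop shows; the direct-FIR column is just for scale.

import math

# Gardner's quoted cost, 34*log2(N) - 151, vs the N multiplies per
# output sample of a direct-form FIR of the same length
for exp in (10, 14, 17, 20):
    N = 2 ** exp
    gardner = 34 * math.log2(N) - 151
    print(f"N = {N:>8}: ~{gardner:.0f} multiplies vs {N} direct")
# N = 2^17 = 128k gives 34*17 - 151 = 427, matching the number above.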
Reply by Steve Pope May 19, 2015
robert bristow-johnson  <rbj@audioimagination.com> wrote:

>On 5/19/15 11:36 AM, Steve Pope wrote:
>> I would say the latency is (32 + (runtime)) samples, where "runtime"
>> is the amount of time required to compute 32 output samples, and
>> conceivably could be quite small (e.g. 1 or 2 samples).
>and then in that case, your DSP or CPU or whatever is jerking off for
>most of the remaining 30 or 29 samples in the block of time for 32
>samples.
Right. I think the usual engineering terminology is more like the DSP/CPU is "underutilized", or "has a low duty cycle"...
>and if you were to *require* a latency of 32 + 1 or 2 samples, then you
>do not have the axiom above that you need not guarantee the validity of
>any of the 32 samples until the very end of your block of time.
If it's okay to sometimes emit invalid data, perhaps flagging it as such,
you can conceivably save on hardware. This was commonly done in old CD
players since the Red Book specs (BLER etc.) were quite liberal, and
ultimately there is an error-masking feature applied just before the
DACs.
>what if you wanted to pack in as much processing as possible in your
>real-time DSP processing application?
Underutilized hardware is a problem, sometimes. In other scenarios the
power savings from running hardware at a low duty cycle is more important
than the silicon area. It's scenario-dependent.

Steve
Reply by robert bristow-johnson May 19, 2015
On 5/19/15 11:36 AM, Steve Pope wrote:
> robert bristow-johnson <rbj@audioimagination.com> wrote:
>
>> On 5/18/15 8:04 PM, Les Cargill wrote:
>
>>> For the aforementioned VST interface, it's more than that. I think
>>> the smallest my ASIO over USB interface goes to is 32 samples.
>
>>> I don't think you have a 2B problem in addition to that, though.
>
>> *if* samples are being processed in 32 sample blocks, which means you
>> have a block of time equal to 32 sampling periods to process those 32
>> samples...
>>
>> *if* all 32 input samples are guaranteed to be valid at the beginning of
>> your block of time
>>
>> and *if* you need not guarantee the validity of any of the 32 output
>> samples until the very end of your block of time,
>>
>> *then* your processing delay must be at least 64 samples.
>
> I would say the latency is (32 + (runtime)) samples, where "runtime"
> is the amount of time required to compute 32 output samples, and
> conceivably could be quite small (e.g. 1 or 2 samples).
>
and then in that case, your DSP or CPU or whatever is jerking off for
most of the remaining 30 or 29 samples in the block of time for 32
samples.

and if you were to *require* a latency of 32 + 1 or 2 samples, then you
do not have the axiom above that you need not guarantee the validity of
any of the 32 samples until the very end of your block of time.

what if you wanted to pack in as much processing as possible in your
real-time DSP processing application?

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
Reply by Steve Pope May 19, 2015
robert bristow-johnson  <rbj@audioimagination.com> wrote:

>On 5/18/15 8:04 PM, Les Cargill wrote:
>> For the aforementioned VST interface, it's more than that. I think
>> the smallest my ASIO over USB interface goes to is 32 samples.
>> I don't think you have a 2B problem in addition to that, though.
>*if* samples are being processed in 32 sample blocks, which means you
>have a block of time equal to 32 sampling periods to process those 32
>samples...
>
>*if* all 32 input samples are guaranteed to be valid at the beginning of
>your block of time
>
>and *if* you need not guarantee the validity of any of the 32 output
>samples until the very end of your block of time,
>
>*then* your processing delay must be at least 64 samples.
I would say the latency is (32 + (runtime)) samples, where "runtime" is
the amount of time required to compute 32 output samples, and conceivably
could be quite small (e.g. 1 or 2 samples).

Steve
Reply by robert bristow-johnson May 19, 2015
On 5/18/15 8:04 PM, Les Cargill wrote:
>
...
>
> For the aforementioned VST interface, it's more than that. I think
> the smallest my ASIO over USB interface goes to is 32 samples.
>
> I don't think you have a 2B problem in addition to that, though.
>
*if* samples are being processed in 32 sample blocks, which means you
have a block of time equal to 32 sampling periods to process those 32
samples...

*if* all 32 input samples are guaranteed to be valid at the beginning of
your block of time,

and *if* you need not guarantee the validity of any of the 32 output
samples until the very end of your block of time,

*then* your processing delay must be at least 64 samples.

L8r,

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."
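A toy single-threaded simulation of that argument, in Python with
made-up names (not any real audio API): input gathered during one block
period is processed during the next period and emitted during the one
after, so an impulse comes out exactly 2B samples late.

B = 32

def run(x, process):
    out = []
    computed = None   # finished during the previous period; safe to emit
    gathered = None   # completed at the end of the previous period
    for i in range(0, len(x), B):
        out.extend(computed if computed is not None else [0.0] * B)
        # during this period, process the block gathered last period...
        computed = process(gathered) if gathered is not None else None
        # ...while this period's input is still arriving; it is only
        # complete (and trustworthy) once the period ends
        gathered = x[i:i + B]
    return out

x = [1.0] + [0.0] * (4 * B - 1)        # impulse at t = 0
y = run(x, lambda blk: list(blk))      # the "filter" is just a wire
print(y.index(1.0))                    # -> 64, i.e. 2B samples of delay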
Reply by Les Cargill May 18, 2015
robert bristow-johnson wrote:
> On 5/16/15 8:35 PM, Les Cargill wrote:
>> robert bristow-johnson wrote:
>>> On 5/15/15 8:13 PM, Les Cargill wrote:
>>>> robert bristow-johnson wrote:
>>> ...
>>>>>
>>>>> this paper has, as best that i can tell, the least amount of "latency"
>>>>> between the time it was first presented at an AES Convention (Nov 1994)
>>>>> and when published in the Journal (Mar 1995). like 4 friggin' months!
>>>>
>>>> FLAM! ;)
>>>>
>>>>> that's pretty impressive. i've never seen nor heard of any other
>>>>> convention paper go from initial presentation to journal publication
>>>>> in that short period of time.
>>>>
>>>> It's one of the coolest algorithms I have ever read.
>>>
>>> well, LakeDSP (from down-under) patented it, but that patent i think has
>>> expired by now. they beat Bill to the patent office but Bill first
>>> published it.
>>>
>>> there are at least two persons other than either Bill or any of the
>>> Aussies that told me of the same idea. it's one thing to conceive it,
>>> it's another thing to make it work. but, in Bill's paper, i dunno how i
>>> would do it. i don't see how the first buffer of samples can be
>>> processed in time for them to be overlapped and added into the final
>>> output.
>>
>> For a convolution reverb, just sacrifice the first block - pass
>> it through as zeros or as the original samples. For most DAWs
>> that should be less than a millisecond's loss. Almost certainly < 10 ms.
>
> this is not a problem of initialization. it's a problem of keeping the
> pipelines full of samples in a real-time process.
>
> the whole issue is what time is necessary to first guarantee that *all*
> of the input samples of the block are ready (before you pass them to the
> FFT) and then to guarantee, with the fewest of assumptions, that the
> output samples of the block are ready to add them up with the output
> samples of the other FIR segments.
>
> first you have to wait until all B samples are input before you can say
> your input block is good. that's one block delay.
>
> then, the least assumption is that your fast-convolution (FFT,
> H[k]*X[k], iFFT) must get done before the *next* block of input is
> coming in. that's only assuming real-time operation, that your CPU will
> be fast enough for that, and *nothing* else. you cannot count on any of
> the output samples being good until the end of that block period, *then*
> you can start adding them in. that's the second block delay, another B
> samples of delay. *then* you're ready to add those samples in with
> those coming from other FIR segments.
>
> that's double buffering. if you had an infinitely fast computer, you
> could do this with a single block delay. if you make some *valid*
> assumptions about how fast you can expect your FIR segment convolution
> to get done processing your block, you can get a delay for that block
> that is somewhere in between B and 2B.
>
> but i wouldn't make that kind of assumption in a multi-threaded
> environment and i wouldn't likely do this low-latency fast convolution
> (a la Gardner) in anything other than a multi-threaded environment.
> *then* the only assumption i can make is that the fast-convolver for a
> particular FIR segment will get its job done in time before the next
> block is input for it and after its own block is input. that's double
> buffering and the minimum delay is 2B samples. so that portion or
> partition or segment of the FIR (whatever we call it) *must* be delayed
> by at least 2B samples.
>
That well could be. I'd almost have to go through the exercise to know for sure. I am thinking that it is more like 1B worth of samples at first are simply lost. You have at least had the opportunity to do all processing in terms of data availability by the time the second block is received. And I am totally waving hands at the actual hardware interface.
>> Either that or you use brute force, which is good enough for small
>> vectors.
>
> by that, i presume you mean the non-FFT, regular non-fast convolution
> method.
>
>>> it's because i can't see a good way to do it other than
>>> double-buffering. single-buffering requires that the output be ready
>>> instantly after the buffer of input samples is formed. like infinitely
>>> fast DSP bandwidth.
>>
>> So you have a rotating/ring buffer... maybe it's two buffers long.
>
> yes, it is. and you ping-pong between the two buffers, which is the same
> as using one half of the ring buffer while the other half is doing I/O.
>
>> I am biased by the VST plugin interface, in which you get a sample and
>> return a sample.
>
> now, the only thing that *could* be hardware specific is that a sample
> comes in from the A/D simultaneously when another sample goes out to the
> D/A. in a real-time environment, and assuming *only* that your CPU is
> fast enough to process and return that output sample by the end of the
> sampling period, how much is the latency? it must be 2 samples.
For the aforementioned VST interface, it's more than that. I think the
smallest my ASIO over USB interface goes to is 32 samples.

I don't think you have a 2B problem in addition to that, though. At
least for Windows, the plugin threads run on whatever it is they call
the "multimedia" timer-driven interface these days - it's not that far
from real time. Obviously, this can break down.
> the sample output to the D/A is the sample you returned in the previous
> sample time. and the input sample you are presented with at the
> beginning of your sampling period was obtained from the previous
> sample time.
>
> in real-time, if at the beginning of your sampling period, you are
> guaranteed the correctness of your input sample, and if you do *not*
> have to guarantee the correctness of your output sample until the end of
> your sampling period, you have two samples delay. even if the algorithm
> is a wire. now the output of a wire can be computed right away at the
> beginning of the sampling period, but the system does not *assume* that
> that output is ready until the end of the sampling period.
>
>> You'd still need an intermediate buffer as an accumulator.
>>
>> When it's time t for sample s[t], you output it.
>
> i understand all that. even though i have not (yet) done a
> convolutional reverb a la Gardner, i *have* done a couple of
> fast-convolution implementations since grad school.
I've only done a couple, but they weren't real-time.

--

Les Cargill
Reply by robert bristow-johnson May 17, 2015
On 5/16/15 8:35 PM, Les Cargill wrote:
> robert bristow-johnson wrote:
>> On 5/15/15 8:13 PM, Les Cargill wrote:
>>> robert bristow-johnson wrote:
>> ...
>>>>
>>>> this paper has, as best that i can tell, the least amount of "latency"
>>>> between the time it was first presented at an AES Convention (Nov 1994)
>>>> and when published in the Journal (Mar 1995). like 4 friggin' months!
>>>
>>> FLAM! ;)
>>>
>>>> that's pretty impressive. i've never seen nor heard of any other
>>>> convention paper go from initial presentation to journal publication in
>>>> that short period of time.
>>>
>>> It's one of the coolest algorithms I have ever read.
>>
>> well, LakeDSP (from down-under) patented it, but that patent i think has
>> expired by now. they beat Bill to the patent office but Bill first
>> published it.
>>
>> there are at least two persons other than either Bill or any of the
>> Aussies that told me of the same idea. it's one thing to conceive it,
>> it's another thing to make it work. but, in Bill's paper, i dunno how i
>> would do it. i don't see how the first buffer of samples can be
>> processed in time for them to be overlapped and added into the final
>> output.
>
> For a convolution reverb, just sacrifice the first block - pass
> it through as zeros or as the original samples. For most DAWs
> that should be less than a millisecond's loss. Almost certainly < 10 ms.
>
this is not a problem of initialization. it's a problem of keeping the
pipelines full of samples in a real-time process.

the whole issue is what time is necessary to first guarantee that *all*
of the input samples of the block are ready (before you pass them to the
FFT) and then to guarantee, with the fewest of assumptions, that the
output samples of the block are ready to add them up with the output
samples of the other FIR segments.

first you have to wait until all B samples are input before you can say
your input block is good. that's one block delay.

then, the least assumption is that your fast-convolution (FFT,
H[k]*X[k], iFFT) must get done before the *next* block of input is
coming in. that's only assuming real-time operation, that your CPU will
be fast enough for that, and *nothing* else. you cannot count on any of
the output samples being good until the end of that block period, *then*
you can start adding them in. that's the second block delay, another B
samples of delay. *then* you're ready to add those samples in with
those coming from other FIR segments.

that's double buffering. if you had an infinitely fast computer, you
could do this with a single block delay. if you make some *valid*
assumptions about how fast you can expect your FIR segment convolution
to get done processing your block, you can get a delay for that block
that is somewhere in between B and 2B.

but i wouldn't make that kind of assumption in a multi-threaded
environment and i wouldn't likely do this low-latency fast convolution
(a la Gardner) in anything other than a multi-threaded environment.
*then* the only assumption i can make is that the fast-convolver for a
particular FIR segment will get its job done in time before the next
block is input for it and after its own block is input. that's double
buffering and the minimum delay is 2B samples. so that portion or
partition or segment of the FIR (whatever we call it) *must* be delayed
by at least 2B samples.
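To make that bookkeeping concrete, here is a rough Python sketch of a
partition plan that respects the 2B constraint: a segment convolved in
blocks of B samples is double-buffered, so it can only cover the part of
the impulse response at delay >= 2B. The doubling scheme below is an
illustration of the constraint, not Gardner's exact partitioning.

def plan_partition(total_len, first_block=32):
    # (offset D_m into the impulse response, segment length L_m) pairs;
    # the first 2*B taps need a direct (non-FFT) convolver
    segments = [(0, 2 * first_block)]
    offset = 2 * first_block
    while offset < total_len:
        # a segment starting at delay D can use blocks of D/2 samples
        # (delay 2 * D/2 = D) and run until the next segment takes over
        seg_len = min(2 * offset, total_len) - offset
        segments.append((offset, seg_len))
        offset += seg_len
    return segments

segments = plan_partition(128 * 1024)
print("direct FIR for taps", segments[0])
for D_m, L_m in segments[1:]:
    print(f"FFT segment at delay {D_m:>6}, length {L_m:>6},"
          f" block size {D_m // 2:>5}")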
> Either that or you use brute force, which is good enough for small
> vectors.
by that, i presume you mean the non-FFT, regular non-fast convolution method.
>> it's because i can't see a good way to do it other than
>> double-buffering. single-buffering requires that the output be ready
>> instantly after the buffer of input samples is formed. like infinitely
>> fast DSP bandwidth.
>
> So you have a rotating/ring buffer... maybe it's two buffers long.
yes, it is. and you ping-pong between the two buffers, which is the same
as using one half of the ring buffer while the other half is doing I/O.
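For what it's worth, that half-and-half view can be written down in a
few lines of Python; the names are illustrative, and a real version
would swap halves from the audio interrupt or callback.

B = 32
ring = [0.0] * (2 * B)   # one ring buffer, two B-sample halves

def io_half(frame):      # half currently owned by the A/D - D/A transfer
    return (frame % 2) * B

def dsp_half(frame):     # the other half, owned by the processing code
    return ((frame + 1) % 2) * B

for frame in range(4):
    i, d = io_half(frame), dsp_half(frame)
    # I/O fills ring[i:i+B] with fresh samples while the DSP reads and
    # overwrites ring[d:d+B], which the I/O filled during the last frame
    assert i != d and {i, d} == {0, B}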
> > I am biased by the VST plugin interface, in which you get a sample and > return a sample.
now, the only thing that *could* be hardware specific is that a sample
comes in from the A/D simultaneously when another sample goes out to the
D/A. in a real-time environment, and assuming *only* that your CPU is
fast enough to process and return that output sample by the end of the
sampling period, how much is the latency? it must be 2 samples.

the sample output to the D/A is the sample you returned in the previous
sample time. and the input sample you are presented with at the
beginning of your sampling period was obtained from the previous
sample time.

in real-time, if at the beginning of your sampling period, you are
guaranteed the correctness of your input sample, and if you do *not*
have to guarantee the correctness of your output sample until the end of
your sampling period, you have two samples delay. even if the algorithm
is a wire. now the output of a wire can be computed right away at the
beginning of the sampling period, but the system does not *assume* that
that output is ready until the end of the sampling period.
> You'd still need an intermediate buffer as an accumulator.
>
> When it's time t for sample s[t], you output it.
i understand all that. even though i have not (yet) done a
convolutional reverb a la Gardner, i *have* done a couple of
fast-convolution implementations since grad school.

--

r b-j                  rbj@audioimagination.com

"Imagination is more important than knowledge."