Reply by Max November 17, 2016
On Tue, 15 Nov 2016 06:33:39 -0800 (PST), Bruno Afonso
<bafonso@gmail.com> wrote:

> On Saturday, September 17, 2016 at 1:52:59 PM UTC-4, Max wrote:
>
>> A lot of the CPU-economical algorithms that I've seen call for slow
>> variations in delays, etc to avoid perception of repeating patterns or
>> resonances. I haven't experimented with that aspect, so I don't know
>> if the LFO patterns themselves could be perceived. It seems like it
>> could lend an unrealistic sound though.
>>
>> The techniques that I've been curious about are:
>> FDN: Feedback Delay Networks
>> DWN: Digital Waveguide Networks
>> Waveguide Mesh methods
>> SDN: Scattering Delay Networks
>>
>> The latter was featured in a paper by Enzo De Sena and Julius O. Smith,
>> "Efficient Synthesis of Room Acoustics via Scattering Delay Networks":
>>
>> https://arxiv.org/pdf/1502.05751.pdf
>>
>> That references other papers by Vorlander, Savioja, and others, that
>> feature some of the other approaches above.
>>
>> I was curious about whether anyone has experimented with those.
>
> Seems interesting but unfortunately one needs to buy the article to
> access the media. How does it sound?
The paper came up for me back when I was initially searching for info. Maybe they've locked it down since then.

SDNs are a simplified version of waveguide networks, and given the objective of running in realtime (video games, VR, etc.), I was surprised that it would work at all in realtime. DWNs are notorious CPU hogs.

I have seen some sites that had samples online. I don't have the URLs handy at the moment, but I'll try to locate them again later. I thought the sound was pretty good, considering that they're only accurately simulating the first wall reflections. I'm not sure how they're doing subsequent 2nd-order and higher reflections, but I get the impression that they reuse the same waveguide that was used for the first reflections. I'd love to know more about how that could work.

I've also heard one demo where it seemed like percussive events would trigger resonant peaks. So perhaps more video game apps than hifi, or maybe the demo itself had problems.
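If it helps anyone else reading along: as far as I can tell from the paper, the heart of an SDN node is an isotropic lossless scattering matrix, S = (2/N)*ones(N,N) - I. A minimal sketch of that one operation in C, based on my reading of the paper, so treat the details as unverified:

#include <stddef.h>

/* One SDN scattering node: out = S*in with S = (2/N)*ones - I.
   'in' holds the N incoming wave variables from the other nodes,
   'out' the N outgoing ones.  This is just the scattering step;
   the delay lines, source/mic taps, and wall filters around it
   are omitted. */
static void sdn_scatter(const double *in, double *out, size_t N)
{
    double sum = 0.0;
    for (size_t i = 0; i < N; i++)
        sum += in[i];
    for (size_t i = 0; i < N; i++)
        out[i] = (2.0 / (double)N) * sum - in[i];
}

Because S is a reflection about the all-ones vector it's orthogonal, so the scattering itself is lossless; any absorption has to come from the wall filters around it.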
Reply by Bruno Afonso November 15, 2016
On Saturday, September 17, 2016 at 1:52:59 PM UTC-4, Max wrote:

> A lot of the CPU-economical algorithms that I've seen call for slow
> variations in delays, etc to avoid perception of repeating patterns or
> resonances. I haven't experimented with that aspect, so I don't know
> if the LFO patterns themselves could be perceived. It seems like it
> could lend an unrealistic sound though.
>
> The techniques that I've been curious about are:
> FDN: Feedback Delay Networks
> DWN: Digital Waveguide Networks
> Waveguide Mesh methods
> SDN: Scattering Delay Networks
>
> The latter was featured in a paper by Enzo De Sena and Julius O. Smith,
> "Efficient Synthesis of Room Acoustics via Scattering Delay Networks":
>
> https://arxiv.org/pdf/1502.05751.pdf
>
> That references other papers by Vorlander, Savioja, and others, that
> feature some of the other approaches above.
>
> I was curious about whether anyone has experimented with those.
Seems interesting but unfortunately one needs to buy the article to access the media. How does it sound?
Reply by Steve Pope September 19, 2016
Max  <Max@sorrynope.com> wrote:

>On Fri, 16 Sep 2016 11:41:39 -0700 (PDT), makolber@yahoo.com wrote:
>> I don't understand how diffuse delays are created in DSP. In a room, I
>> can visualize a sound impulse being reflected off a slanted surface in
>> a way that the reflected impulse is diffuse. How is this done in DSP?
>> Is it simply many many discrete taps close together that approximate a
>> diffuse delay or is it something else?
>Hi Mark,
> Robert has already explained this, but in terms of convolutional
> reverbs-- Often a blank pistol, a spark, or some other real-life
> substitute for an impulse is used to collect the characteristic
> response of a room, cathedral, or whatever. Presuming LTI response,
> that can be used to reproduce the acoustic characteristics of the
> tested space.
I see a distinction between diffuse reflections and echo density. Echo density is an overall metric in units of reflections per second and might be uniform along the time axis, whereas diffuse reflections are individually spread along the time axis but are still distinct reflections. In either case, the characteristic can be simulated/implemented in discrete time.

Steve
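P.S. To make the distinction concrete in code -- a toy sketch, untested, with made-up constants: one way to get a diffuse reflection (as opposed to merely raising echo density) is to smear a single tap into a short cluster of random, decaying taps.

#include <stdlib.h>

#define CLUSTER_LEN 64  /* ~1.3 ms at 48 kHz -- an arbitrary choice */

/* Replace one discrete reflection of the given gain with a short
   cluster of random taps under a decaying envelope.  The result is
   still one "reflection", but spread along the time axis. */
static void make_diffuse_tap(double gain, double cluster[CLUSTER_LEN])
{
    for (int n = 0; n < CLUSTER_LEN; n++) {
        double r   = 2.0 * rand() / (double)RAND_MAX - 1.0; /* in [-1, 1] */
        double env = 1.0 - (double)n / CLUSTER_LEN;         /* linear decay */
        cluster[n] = gain * r * env / 8.0;  /* crude energy normalization */
    }
}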
Reply by Max September 18, 2016
On Fri, 16 Sep 2016 11:41:39 -0700 (PDT), makolber@yahoo.com wrote:

> I don't understand how diffuse delays are created in DSP. In a room, I
> can visualize a sound impulse being reflected off a slanted surface in
> a way that the reflected impulse is diffuse. How is this done in DSP?
> Is it simply many many discrete taps close together that approximate a
> diffuse delay or is it something else?
>
> thanks
>
> Mark
Hi Mark,

Robert has already explained this, but in terms of convolutional reverbs: often a blank pistol, a spark, or some other real-life substitute for an impulse is used to collect the characteristic response of a room, cathedral, or whatever. Presuming an LTI response, that can be used to reproduce the acoustic characteristics of the tested space.

However, that's pretty CPU-expensive, so I was inquiring about perceptual/modeling approaches, which Robert has described. As well as the papers that you'll find via Google, there's a book by Will Pirkle, "Designing Audio Effect Plug-Ins in C++", that has info on reverbs, and even includes a working environment that can either play back samples through the code or create VST plugins for use with music composition programs like Cubase.
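For anyone who wants to see how little code the convolutional approach itself takes (the cost is all in the arithmetic), here's a direct-form sketch. It's just the textbook FIR sum, nothing from Pirkle's book specifically, and for a seconds-long IR you'd use FFT-based fast convolution instead:

#include <stddef.h>

/* y[n] = sum_k h[k] * x[n-k]: convolve input x with a measured
   impulse response h.  y must have room for xlen + hlen - 1
   samples.  O(xlen*hlen), which is why long IRs need the FFT. */
static void conv_reverb(const double *x, size_t xlen,
                        const double *h, size_t hlen,
                        double *y)
{
    for (size_t n = 0; n < xlen + hlen - 1; n++) {
        double acc = 0.0;
        for (size_t k = 0; k < hlen; k++) {
            if (n >= k && n - k < xlen)
                acc += h[k] * x[n - k];
        }
        y[n] = acc;
    }
}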
Reply by Max September 18, 2016
On Wed, 14 Sep 2016 09:54:21 -0400, Max <Max@sorrynope.com> wrote:

> I know that reverbs are normally built with all-pass and/or comb
> filters in various combinations. Are there topologies that are more
> CPU-efficient, or more realistic? I'm interested in small room
> emulation, but I need to keep up in real-time, and processing power
> may be limited.
>
> Obviously many common algorithms are just refinements of the old
> Schroeder algorithm, but that's a primitive and metallic sound. There
> must be efficient modern algorithms that solve those problems. Any
> recommendations?
PS: A ref to another JOS paper on Digital Waveguide methods:

http://www.ece.uvic.ca/~bctill/papers/numacoust/Smith_Rocchesso_1997.pdf

I'm currently trying to parse various papers on DWN's, DWM's, and Scattering Delay Networks. It seems that they're all advances on the traditional modeling approaches with all-pass and comb filters. SDN's in particular are intriguing because they're supposedly CPU-economical, and their objective is accurate modeling of initial reflections, with increasing approximation for less critical higher-order reflections. The end result should be way more accurate for room modeling without the CPU expense of convolution.

But even the terminology is not mainstream (at least from what I can discern). The math symbology is different from the norm for DSP references. They must be building on math models outlined in pioneering papers that I haven't seen yet. So they're not an easy read by any means.

Has anyone here experimented with this stuff? Robert?
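For anyone else trying to parse these papers: the one recurring primitive I have managed to decode so far is the lossless scattering junction from the Smith/Rocchesso paper linked above. The junction pressure is pJ = 2*sum_i(G_i * pin_i) / sum_i(G_i) over the connected waveguides with admittances G_i, and each outgoing wave is pout_i = pJ - pin_i. A sketch of that (my transcription -- check it against the paper before trusting it):

#include <stddef.h>

/* Lossless N-port scattering junction for pressure waves.
   G[i] is the admittance of waveguide i, pin[i] its incoming
   wave; pout[i] receives the outgoing wave. */
static void dwn_junction(const double *G, const double *pin,
                         double *pout, size_t N)
{
    double num = 0.0, den = 0.0;
    for (size_t i = 0; i < N; i++) {
        num += G[i] * pin[i];
        den += G[i];
    }
    double pJ = 2.0 * num / den;   /* junction pressure */
    for (size_t i = 0; i < N; i++)
        pout[i] = pJ - pin[i];
}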
Reply by Max September 17, 2016
On Fri, 16 Sep 2016 11:09:13 -0700 (PDT), robert bristow-johnson
<rbj@audioimagination.com> wrote:

> On Friday, September 16, 2016 at 9:33:13 AM UTC-4, Randy Yates wrote:
> > robert bristow-johnson <rbj@audioimagination.com> writes:
> > > [...]
> > > a good reverb is not all that inexpensive to implement.
> >
> > That's what I've been thinking. Needs a lot of memory and a lot of
> > processing power for long convolutions, even using frequency-domain
> > filtering.
>
> i didn't mean those convolutional reverbs. of course they are expensive
> and fast-convolution (using FFT) is the only way to do them real time.
> otherwise you have something like a quarter million FIR taps.
I think part of the disparity in this thread has to do with convolutional vs modeling (perceptual) approaches. I was discounting convolutional approaches simply due to CPU load, but that's a good point about memory as well. It would also be nice to have continuously variable control over parameters with minimal audio glitching, which is probably not in the domain of convos.
> > > and then to keep the room modes at bay, you might need to slowly move
> > > the taps on a couple of the delays.
A lot of the CPU-economical algorithms that I've seen call for slow variations in delays, etc. to avoid perception of repeating patterns or resonances. I haven't experimented with that aspect, so I don't know if the LFO patterns themselves could be perceived. It seems like it could lend an unrealistic sound though. (A sketch of the kind of modulated tap I mean is at the end of this post.)

The techniques that I've been curious about are:

FDN: Feedback Delay Networks
DWN: Digital Waveguide Networks
Waveguide Mesh methods
SDN: Scattering Delay Networks

The latter was featured in a paper by Enzo De Sena and Julius O. Smith, "Efficient Synthesis of Room Acoustics via Scattering Delay Networks":

https://arxiv.org/pdf/1502.05751.pdf

That references other papers by Vorlander, Savioja, and others, that feature some of the other approaches above.

I was curious about whether anyone has experimented with those.
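As promised above, the kind of slowly modulated tap I was describing -- depth and rate are guesses, and the fractional delay is just linear interpolation, so this is illustrative rather than anything from a particular paper:

#include <math.h>

#define BUF_LEN 65536  /* power of two so wraparound is a cheap mask */

/* Read a delay line at a tap whose length wobbles a few samples
   around base_delay under a slow sine LFO.  write_pos is where the
   most recent sample was written; the index mask assumes two's
   complement wraparound. */
static double modulated_tap(const double buf[BUF_LEN], long write_pos,
                            double base_delay, double depth,
                            double lfo_phase)
{
    double d    = base_delay + depth * sin(lfo_phase);
    long   di   = (long)d;
    double frac = d - (double)di;
    long   i0   = (write_pos - di)     & (BUF_LEN - 1);
    long   i1   = (write_pos - di - 1) & (BUF_LEN - 1);
    return (1.0 - frac) * buf[i0] + frac * buf[i1];  /* linear interp */
}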
Reply by Steve Pope September 16, 2016
On Friday, September 16, 2016 at 2:41:43 PM UTC-4, mako...@yahoo.com wrote:

> I don't understand how diffuse delays are created in DSP. In a room,
> I can visualize a sound impulse being reflected off a slanted surface
> in a way that the reflected impulse is diffuse. How is this done in
> DSP? Is it simply many many discrete taps close together that
> approximate a diffuse delay or is it something else?
Given a sampled-time source, one can convolve the source with any (known) continuous-time impulse response, and sample the result, without either incurring approximations or requiring infinite computation. (If this is not obvious I can expand on why this is true.) So there is no theoretical problem with creating an LTI digital reverb that is "diffuse" in the sense you describe.

RB-J's wandering tap delays may be more practical -- I am not sure, I have never used that approach.

On the practicality/cost question, my premise is if you could sell a digital reverb for under $10,000 in 1980 (the last time I worked on them commercially), it does not require a lot of resources in modern terms.

Steve
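P.S. The reasoning spelled out, in case it saves anyone the derivation: model the sampled source as an impulse train x_c(t) = sum_n x[n] * delta(t - n*T). Driving the continuous-time response h_c(t) with that gives

  y(t) = sum_n x[n] * h_c(t - n*T)

and sampling the result at t = m*T gives

  y[m] = sum_n x[n] * h_c((m - n)*T)

which is exactly a discrete convolution with h[k] = h_c(k*T). No approximation enters anywhere, and for a finite-length source each output sample involves only finitely many nonzero terms once the response has decayed below any threshold you care about.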
Reply by robert bristow-johnson September 16, 2016
On Friday, September 16, 2016 at 2:41:43 PM UTC-4, mako...@yahoo.com wrote:
> > there are other reverb algs that they used with DSP hardware that does
> > not model a **specific** room. but is an LTI that models in generalities
> > what happens in what we might call a nondescript "good" room. like
> >
> > 1. direct path
> > 2. early reflections (simple multitap delay line)
> > 3. a pre-delay before the reverberant reflections
> > 4. a diffuse and roughly exponentially decaying diffuse reverberant reflections.
>
> I read this newsgroup because it often presents a learning opportunity
> for me and here is such a case.
>
> re DSP reverbs.
>
> and analogous to RF multi-path...
>
> I understand how a DSP tapped delay line can provide discrete reflections
> of whatever amplitude and delay and number desired.
>
> I don't understand how diffuse delays are created in DSP. In a room, I
> can visualize a sound impulse being reflected off a slanted surface in a
> way that the reflected impulse is diffuse. How is this done in DSP? Is
> it simply many many discrete taps close together that approximate a
> diffuse delay or is it something else?
please take a look at this paper.

http://freeverb3vst.osdn.jp/doc/matlab_reverb.pdf

Google Schroeder reverb, Jot reverb, Gardner reverb, Griesinger reverb.

the way that diffusion of the impulse response happens is with all-pass filters having disparate delays inside (some people think that the delays should be related by mutually prime integer samples) being cascaded. or even with one all-pass filter being contained inside another (an APF+delay replacing the delay inside another APF). so one impulse becomes several happening at weird time delays. those feed back and cause *more* impulses at even weirder time delays. something like that.

r b-j
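p.s. a bare-bones sketch of one such all-pass section, if it helps (the delay lengths and g = 0.7 you'd plug in are generic textbook choices, not a recommendation):

#include <string.h>

#define APF_MAX 1024

/* Schroeder all-pass section:  y[n] = -g*x[n] + x[n-D] + g*y[n-D].
   cascade a few of these with mutually prime delays D (e.g. 142,
   379, 587 samples) to smear one impulse into many. */
typedef struct { double buf[APF_MAX]; int len, pos; double g; } Allpass;

static void allpass_init(Allpass *ap, int len, double g)
{
    memset(ap->buf, 0, sizeof ap->buf);
    ap->len = len;
    ap->pos = 0;
    ap->g   = g;
}

static double allpass_tick(Allpass *ap, double x)
{
    double d = ap->buf[ap->pos];      /* holds x[n-D] + g*y[n-D] */
    double y = -ap->g * x + d;
    ap->buf[ap->pos] = x + ap->g * y; /* store for D samples from now */
    if (++ap->pos >= ap->len)
        ap->pos = 0;
    return y;
}

nesting one of these in place of the plain delay inside another APF gives the "APF inside an APF" structure i mentioned.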
Reply by Mark September 16, 2016
> there are other reverb algs that they used with DSP hardware that does not
> model a **specific** room. but is an LTI that models in generalities what
> happens in what we might call a nondescript "good" room. like
>
> 1. direct path
> 2. early reflections (simple multitap delay line)
> 3. a pre-delay before the reverberant reflections
> 4. a diffuse and roughly exponentially decaying diffuse reverberant reflections.

To all

I read this newsgroup because it often presents a learning opportunity for me and here is such a case.

re DSP reverbs.

and analogous to RF multi-path...

I understand how a DSP tapped delay line can provide discrete reflections of whatever amplitude and delay and number desired.

I don't understand how diffuse delays are created in DSP. In a room, I can visualize a sound impulse being reflected off a slanted surface in a way that the reflected impulse is diffuse. How is this done in DSP? Is it simply many many discrete taps close together that approximate a diffuse delay or is it something else?

thanks

Mark
Reply by robert bristow-johnson September 16, 2016
On Friday, September 16, 2016 at 9:33:13 AM UTC-4, Randy Yates wrote:
> robert bristow-johnson <rbj@audioimagination.com> writes:
> > [...]
> > a good reverb is not all that inexpensive to implement.
>
> That's what I've been thinking. Needs a lot of memory and a lot of
> processing power for long convolutions, even using frequency-domain
> filtering.
i didn't mean those convolutional reverbs. of course they are expensive and fast-convolution (using FFT) is the only way to do them real time. otherwise you have something like a quarter million FIR taps.
> > and then to keep the room modes at bay, you might need to slowly move
> > the taps on a couple of the delays.
>
> Are you talking about the room modes of the modeled room (the one the
> impulse response being convolved describes)?
there are other reverb algs that they used with DSP hardware that does not model a **specific** room. but is an LTI that models in generalities what happens in what we might call a nondescript "good" room. like

1. direct path
2. early reflections (simple multitap delay line)
3. a pre-delay before the reverberant reflections
4. a diffuse and roughly exponentially decaying diffuse reverberant reflections.

that's sorta what the Schroeder comb and APF design and the Jot multiple feedback design do. it sounds like a room, but not any specific room.

and still, because the model has many fewer parameters than does a general convolutional reverb, the danger of resonating room modes is greater and one way to deal with that is to slowly move the taps of a couple of delays in feedback. usually make one get longer while the other gets shorter so that there is no pitch bias. this, of course, makes the system time-variant, but you can't really tell so.

if you want Randy, i can send you a C file of a Jot 'verb i did back in the previous decade to give you an idea. (and since it doesn't slide the taps around, the alg is LTI.) lemme know.

r b-j
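p.s. the contrary-motion tap slide in code, roughly (every number here is made up for illustration):

#include <math.h>

#ifndef M_PI
#define M_PI 3.14159265358979323846
#endif

/* wander two feedback delay lengths in opposite directions with one
   slow LFO, so the momentary pitch shifts cancel and there is no net
   pitch bias.  t is time in samples at 48 kHz. */
static void slide_taps(double t, double *delayA, double *delayB)
{
    const double base_a = 1557.0, base_b = 1617.0;    /* placeholder lengths */
    const double depth  = 4.0;                        /* samples of wander   */
    const double rate   = 2.0 * M_PI * 0.1 / 48000.0; /* ~0.1 Hz LFO         */
    double m = depth * sin(rate * t);
    *delayA = base_a + m;   /* one gets longer...              */
    *delayB = base_b - m;   /* ...while the other gets shorter */
}

the fractional delay lengths this produces are what you'd feed an interpolating delay-line read, like the one Max sketched upthread.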