DSPRelated.com
Forums

Generating 1/f^2 and 1/f^3 noise

Started by Marc Brooker March 9, 2007
On Mar 16, 1:38 pm, "Andor" <andor.bari...@gmail.com> wrote:
> 1. Why don't you include the titles of the referenced articles? That
> would make searching for them a lot easier.
It all depends on the bibliography format of the journal. You're right that as long as we are posting a preprint we might as well use a more verbose format, since we don't have space limitations. I'll bug Alejandro about this.
> 2. You define the L2-norm of a vector (on page 3) as
>
> ||x||_L2 = sqrt(1/N sum_{k=1}^N x_k^2).
>
> This is the first time I see the factor sqrt(1/N) included in the
> definition of the L2-norm.
It's not unusual to normalize a finite-dimensional norm, and this way it's just the root-mean-square deviation and so has a simple interpretation. If we didn't divide by N then the magnitude would not have an intuitive meaning because it would depend upon our frequency sampling.

(Actually, dividing by N in this context is equivalent to an Euler approximation for the integrated norm, for a suitable choice of frequency units, which is really what we are talking about here. It might be clearer for us to simply define the norm as an integral.)
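To make that point concrete, here is a small numerical illustration, assuming a uniform frequency grid and a made-up 1/sqrt(f) target magnitude (neither is from the paper): the 1/N-normalized norm is just the RMS value of the samples, so it barely changes when the grid is refined, whereas the unnormalized sum grows with N.

import numpy as np

def l2_normalized(x):
    # ||x||_L2 = sqrt(1/N * sum_k x_k^2), i.e. the RMS value of the vector
    return np.sqrt(np.mean(np.abs(x) ** 2))

def h_target(f):
    # illustrative magnitude response only (1/sqrt(f), as for 1/f noise)
    return 1.0 / np.sqrt(f)

f_coarse = np.linspace(0.01, 0.5, 100)      # 100 samples of the band
f_fine = np.linspace(0.01, 0.5, 10_000)     # 10000 samples of the same band

print(l2_normalized(h_target(f_coarse)))    # ~2.8, roughly independent of N
print(l2_normalized(h_target(f_fine)))      # ~2.8 again (Riemann-sum view)
print(np.linalg.norm(h_target(f_fine)))     # unnormalized norm grows with N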
> 3. Fig. 4 shows how the filters perform with regard to the L2(|H|^2 - R)
> error. Specifically, the IIR-global filters fare better than the
> IIR-L2(b) filters (which are supposed to minimize exactly that error).
> Perhaps you can try to use the IIR-global filter as a start value for
> the gradient search to minimize the L2(b) error (instead of the
> Yule-Walker filter).
Yes, that's certainly something that you might try if you cared about minimizing the L2 norm. However, for the L2 norm there isn't any method that will guarantee a global minimum as far as I know, so everything is rather ad hoc.

In our case, we care mostly about the Chebyshev norm, and as we explain in the paper this is likely to be the case in many applications.
> " > Filtering in the frequency > domain requires the entire data sequence yn to > be computed and stored in advance, and if many long > sequences are required the storage becomes prohibitive. > " > If I understand you correctly you are saying that frequency domain > filtering can only be applied to a single data block. This is not > quite correct. Using overlap methods (overlap-save or overlap-add), > y_n can be constructed from short blocks.
Yes, I'm quite aware of this. If you look later in the paper, you'll notice that we explicitly mention the block-based FFT methods for applying FIR filters.

The sentence you are quoting was talking about filtering *entirely* in the frequency domain, i.e. it was in the context of just FFTing the whole data sequence. Doing overlap-add, I would argue, is not entirely in the frequency domain because you are transforming a sequence of time-windowed blocks. Moreover, it presumes that you have made an FIR approximation of the frequency response that you want.

In any case, we should perhaps just be more explicit about what we mean, to avoid any possibility of confusion.

Steven
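For readers following along, here is a minimal sketch of the block-based overlap-add filtering being discussed. It is a generic FIR example, not the filters from the paper; the block length and the function name are my own choices.

import numpy as np

def overlap_add_filter(x, h, block_len=4096):
    # Filter a long sequence x with FIR h using block FFTs (overlap-add).
    # Equivalent to np.convolve(x, h) up to floating-point error, but only
    # one block of x needs to be held at a time.
    M = len(h)
    nfft = block_len + M - 1
    H = np.fft.rfft(h, nfft)
    y = np.zeros(len(x) + M - 1)
    for start in range(0, len(x), block_len):
        block = x[start:start + block_len]
        yb = np.fft.irfft(np.fft.rfft(block, nfft) * H, nfft)
        n = min(nfft, len(y) - start)
        y[start:start + n] += yb[:n]     # overlapping tails add up
    return y

# Example: white Gaussian noise through an arbitrary short FIR, block by block.
rng = np.random.default_rng(0)
x = rng.standard_normal(100_000)
h = rng.standard_normal(65)
y = overlap_add_filter(x, h)
print(np.allclose(y, np.convolve(x, h)))   # True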
Steven wrote:
> [snip]
>
> In any case, we should perhaps just be more explicit about what we
> mean, to avoid any possibility of confusion.
Perhaps it's just a matter of content. Your target readership might find everything perfectly clear.

Regards,
Andor
On Mar 18, 11:53 am, "Andor" <andor.bari...@gmail.com> wrote:
> [snip]
>
> Perhaps it's just a matter of content.
I meant "conext".
Andor wrote:

> I meant "conext".
Context?

Jerry
--
Engineering is the art of making what you want from things you can get.
¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯¯
Jerry Avins wrote:
> Andor wrote:
> > I meant "conext".
>
> Context?
I are one :-).
ray@desinformation.de wrote:
> On Mar 9, 09:42, Marc Brooker <myrealn...@gmail.com> wrote:
> > Hello,
> >
> > I am currently writing software to generate 1/f, 1/f^2 and 1/f^3 noise
> > for use in a simulation, using an existing Gaussian PRNG. Currently, I
> > am generating 1/f noise with the Voss-McCartney algorithm (from here:
> > http://www.firstpr.com.au/dsp/pink-noise/), and it seems to work
> > extremely well and perform well enough for my application.
> >
> > I am generating the 1/f^2 noise (noise with a rolloff of 20 dB per
> > decade) by passing the output of my Gaussian noise source through a
> > single-pole integrator. The output of this process matches the target
> > rolloff very nicely. Is this a generally accepted way to generate 1/f^2
> > noise?
> >
> > Does anybody know of a more elegant way to generate 1/f^3 (30 dB per
> > decade) noise than to pass the output of my 1/f noise generator through
> > an integrator?
>
> I do it in an even more inelegant way (with an IDFT):
>
> I calculate the amplitude for whatever spectral distribution of the
> "noise" I want.
>
> For example: amplitude = 1/f^1.1
>
> I "randomize" the phase with "random" numbers, then do an IDFT. For audio
> sampled at 44100 Hz and a loop length of 131072 samples, it sounds
> quite like "noise".
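As an aside, here is a minimal sketch of the integrator approach from the quoted question, assuming a leaky single-pole integrator; the pole value, sample rate, and names are my own illustrative choices, not anything Marc specified.

import numpy as np
from scipy.signal import lfilter, welch

rng = np.random.default_rng(1)
fs = 44100
white = rng.standard_normal(2**18)    # stands in for the existing Gaussian PRNG

# Single-pole (leaky) integrator: y[n] = a*y[n-1] + x[n].  With a just below 1
# the power spectrum falls at ~20 dB/decade (1/f^2) above the pole frequency;
# a perfect integrator (a = 1) is exactly 1/f^2 but drifts without bound.
a = 0.9999
brown = lfilter([1.0], [1.0, -a], white)

# Cascading the same integrator after a 1/f (pink) source would add another
# 20 dB/decade, giving approximately 1/f^3.

# Quick check of the slope: well above the pole, Pxx should drop by roughly
# a factor of 100 per decade of frequency.
f, Pxx = welch(brown, fs=fs, nperseg=8192)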
I do it kind of the same way too: generate the magnitude using whatever function I wish to use, and randomize the phase. There's nothing wrong with doing that.
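For reference, a minimal sketch of this spectral-shaping recipe, using the 131072-sample loop length and 44.1 kHz rate from the post above; the exponent and the normalization are just illustrative choices.

import numpy as np

rng = np.random.default_rng(0)
N = 131072                       # loop length in samples, as in the post above
fs = 44100                       # sample rate from the post
alpha = 1.1                      # amplitude falls off as 1/f^alpha

# Desired magnitude on the positive-frequency bins (DC bin left at zero).
k = np.arange(1, N // 2 + 1)                 # bin indices 1 .. N/2
mag = 1.0 / k ** alpha

# Random phases for bins 1 .. N/2-1; the Nyquist bin must stay real.
spectrum = np.zeros(N // 2 + 1, dtype=complex)
phases = rng.uniform(0.0, 2.0 * np.pi, size=N // 2 - 1)
spectrum[1:-1] = mag[:-1] * np.exp(1j * phases)
spectrum[-1] = mag[-1]

noise = np.fft.irfft(spectrum, N)            # real-valued time-domain loop
noise /= np.max(np.abs(noise))               # normalize for playback

# Played back-to-back, the loop repeats every N / fs = 131072 / 44100 ~ 2.97 s.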
> Concentrating on this sound, I sometimes seem to notice a certain
> "rhythm", which is the 44100 Hz / 131072?
Well, if it's a loop and you play it as such, you'll obviously hear the exact same thing every 131072/44100 ≈ 3 seconds.
> I guess one could find out something about audio perception with
> one's brain.
I don't see what it has to do with perception. If you play the same thing in a loop, it will necessarily sound like a loop, unless what you play is perfectly uniform, which is never the case with noise.