[time-nuts] practical details on generating artificial flicker noise

Jim Lux jimlux at earthlink.net
Sun Nov 23 11:10:57 EST 2014


On 11/23/14, 7:21 AM, Magnus Danielson wrote:
> Jim,
>
> I find myself providing guidance in both the 2010 and 2013 threads, and
> they are still valid starting points.
>
> For music synthesizer applications, flicker noise has been done, such
> as in this schematic:
> https://rubidium.dyndns.org/~magnus/synths/friends/stopp/asm1ns.pdf
> The work is traceable back to the Barnes-Jarvis work. Might be fun to
> know. :)
>
There are a bunch of schemes described at
http://www.firstpr.com.au/dsp/pink-noise/

Some of them look remarkably like the Barnes, Jarvis, and Greenhall
approaches.


> Anyway, yes, it would be reasonable that you would need that many
> sections if you really intend to cover the full range, but on the other
> hand, usually you have a corner above which white noise dominates, and
> you really don't need to go much more than an octave or two beyond that
> corner. Doing 16-17 sections is cheap today.

Computationally, yes. But any time I start down the path of implementing
something where the literature uses half a dozen stages and I'm going to
be doubling or tripling that, I start to wonder whether there's some
numerical issue that will bite me. After all, the difference equation for
the lowest-frequency cutoff, at this high sample rate, has coefficients
that are very close to 1. (Appendix A of the Barnes & Greenhall paper
shows a lot of zero values in the tabulated area, but they were using
double precision and not printing all the digits.)
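
Just to put numbers on it, here's a quick back-of-the-envelope check (my
own sketch, not their code): the pole coefficient of a simple first-order
lag section, a = exp(-2*pi*fc/fs), crowds up against 1 as the cutoff
drops:

import numpy as np

fs = 1000.0                      # sample rate, Hz (just for illustration)
for fc in (1e-1, 1e-2, 1e-3, 1e-4):
    a = np.exp(-2 * np.pi * fc / fs)
    print("fc = %g Hz: a = %.12f, 1 - a = %.3e" % (fc, a, 1.0 - a))

At fc = 1e-4 Hz, 1 - a comes out around 6.3e-7, only a handful of
single-precision ULPs away from 1, which is the argument for staying in
doubles throughout.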



>
> The other approach is to read Chuck Greenhall's more recent papers and
> see if any of those methods are applicable to your needs.
>
> Also, remember that in the Barnes-Jarvis approach, the distance between
> the upper and lower corners is separate from how tight a variation is
> allowed, which is what controls how many sections you need. Plotting on
> a scale normalized by sqrt(f) helps in analysis.

Yes.. the examples in the paper make that pretty clear.. 4 sections
spread over 6 decades gets you a fair amount of variation.
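
Plotting that normalized check is only a few lines in Python; here's a
sketch of mine, with one arbitrary lag section standing in for a real
cascade:

import numpy as np
from scipy import signal

fs = 1000.0
b, a = [1.0, -0.9], [1.0, -0.99]    # placeholder coefficients, not a real fit
f = np.logspace(-3, np.log10(fs / 2), 500)   # Hz
w = 2 * np.pi * f / fs                       # radians/sample
_, h = signal.freqz(b, a, worN=w)
flatness = np.abs(h) * np.sqrt(f)   # constant (within ripple) for ideal 1/f

For a good Barnes-Jarvis fit, that last array should stay within the
allowed tolerance across the band of interest.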

There are also, of course, all those notes about "selecting an
appropriate starting point by trial and error", which is probably why
they wrote the analysis part of their code: make a run with one value,
look at the plot, hmm, change a value, make another run, and so on.


Well.. I'm grinding through the implementation now.. in Python, as it
happens, so I'm trying to figure out how to do it in a Python-esque way,
as opposed to my usual Fortran/Matlab-in-Python style. It seems one
should be able to have nice abstracted filter sections that you can
iterate through, etc.
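
Something along these lines is what I have in mind (the names and
structure are mine, just a sketch, not anything from the papers):

import numpy as np

class Section(object):
    """One first-order section: y[n] = b0*x[n] + b1*x[n-1] - a1*y[n-1]."""
    def __init__(self, b0, b1, a1):
        self.b0, self.b1, self.a1 = b0, b1, a1
        self.x1 = 0.0   # previous input
        self.y1 = 0.0   # previous output

    def step(self, x):
        y = self.b0 * x + self.b1 * self.x1 - self.a1 * self.y1
        self.x1, self.y1 = x, y
        return y

def run_cascade(sections, samples):
    out = np.empty_like(samples)
    for n, x in enumerate(samples):
        for s in sections:       # just iterate through the sections
            x = s.step(x)
        out[n] = x
    return out

Each Section carries its own state, so stacking 16-17 of them is just a
longer list, and feeding the cascade white noise from numpy.random is
one line.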

BTW, if anyone is going to implement the algorithm in the PTTI paper, 
you really need the Greenhall JPL report also, because a lot of the 
terminology and variables carry forward.

http://ipnpr.jpl.nasa.gov/progress_report/42-77/77M.PDF


(Of course, as I think about it, if I need 60 seconds' worth of samples
at 1 kHz, that's only 60,000 samples, so I could just do it by generating
64k of white noise, FFTing, applying a -3 dB/octave slope, and then
inverse transforming.. And, since the FFT of white noise is white noise,
it's really just taking N samples of white noise, applying the filter,
and doing the transform to the time domain.)

(yes, the FFT method was discussed in the earlier threads)
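
In numpy, the whole FFT approach is just a few lines. A rough sketch of
mine (handling of the DC bin and circular-convolution edge effects are
the usual caveats):

import numpy as np

n = 65536                # a bit more than the 60,000 samples needed
fs = 1000.0              # sample rate, Hz

# The FFT of white noise is white noise, so start directly in the
# frequency domain with complex Gaussian bins.
spec = np.random.randn(n // 2 + 1) + 1j * np.random.randn(n // 2 + 1)
f = np.fft.rfftfreq(n, d=1.0 / fs)
shape = np.zeros_like(f)
shape[1:] = 1.0 / np.sqrt(f[1:])   # 1/sqrt(f) amplitude = -3 dB/octave power
flicker = np.fft.irfft(spec * shape, n)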

