[time-nuts] Allan variance by sine-wave fitting

Magnus Danielson magnus at rubidium.dyndns.org
Mon Nov 27 18:44:12 EST 2017


Hi Jim,

On 11/28/2017 12:03 AM, jimlux wrote:
> On 11/27/17 2:45 PM, Magnus Danielson wrote:
> 
>>
>> There is nothing wrong about attempting new approaches, or even just 
>> testing an idea to see how it pans out. You should then compare it to a 
>> number of other approaches, and as you test things, you should analyze 
>> the same data with different methods. Prototyping that in Python is 
>> fine, but in order to analyze it, you need to be careful about the 
>> details.
>>
>> I would consider one paper just doing the measurements and then 
>> trying different post-processing methods to see how the results vary.
>> Another paper can then take up on that and attempt an analysis that 
>> matches the numbers from the actual measurements.
>>
>> So we might provide tough love, but there is a bit of experience 
>> behind it, and it is worth listening to carefully.
>>
> 
> 
> It is tough to come up with good artificial test data - the literature 
> on generating "noise samples" is significantly thinner than the 
> literature on measuring the noise.

Agree completely. It's really the 1/f flicker noise which is hard.
The white phase and frequency noise forms are trivial in comparison, but 
they also need care in the details.

Getting samples that are Gaussian enough is sometimes harder than 
expected. I always try to consider it as a possible limitation.

Getting enough randomness is another issue: what is the period of the 
noise source, and what are its characteristics?
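
For what it's worth, here is a minimal numpy sketch of the common 
FFT-shaping trick (spectrally coloring white Gaussian noise); the 
function name and lengths are just illustrative, and the Kasdin-Walter 
papers treat the discrete-time details properly:

import numpy as np

def power_law_noise(n, alpha, rng=None):
    # Shape white Gaussian noise in the frequency domain so the PSD
    # goes as f^alpha: 0 = white, -1 = flicker, -2 = random walk.
    rng = np.random.default_rng() if rng is None else rng
    spec = np.fft.rfft(rng.standard_normal(n))
    f = np.fft.rfftfreq(n)
    shape = np.zeros_like(f)
    shape[1:] = f[1:] ** (alpha / 2.0)  # amplitude ~ f^(alpha/2); DC zeroed
    return np.fft.irfft(spec * shape, n)

flicker = power_law_noise(2**16, -1.0)  # nominally 1/f noise

Note that the result is periodic in n, which is exactly the "length of 
the noise source" caveat above.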

> When it comes to measuring actual signals with actual ADCs, there are 
> also a number of traps - you can design a nice approach, using the 
> SNR/ENOB data from the data sheet, and get seemingly good data.
> 
> The challenge is really in coming up with good *tests* of your 
> measurement technique that show that it really is giving you what you 
> think it is.
> 
> A trivial example is this (not a noise measuring problem, per se) -
> 
> You need to measure the power of a received signal - if the signal is 
> narrow band, and high SNR, then the bandwidth of the measuring system 
> (be it a FFT or conventional spectrum analyzer) doesn't make a lot of 
> difference - the precise filter shape is non-critical.  The noise power 
> that winds up in the measurement bandwidth is small, for instance.
> 
> But now, let's say that the signal is a bit wider band or lower SNR or 
> you're uncertain of its exact frequency, then the shape of the filter 
> starts to make a big difference.
> 
> Now, let's look at a system where there's some decimation involved - any 
> decimation raises the prospect of "out of band signals" aliasing into 
> the post-decimation passband.  Now, all of a sudden, the filtering 
> before the decimator starts to become more important. And the number of 
> bits you have to carry starts being more important.
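
To make that aliasing hazard concrete, a small numpy/scipy toy (all 
frequencies here are made up for illustration): a 480 Hz tone sampled 
at 1 kHz and decimated by 10 lands right at 20 Hz in the new 50 Hz 
passband unless it is filtered out first.

import numpy as np
from scipy import signal

fs = 1000.0
t = np.arange(100000) / fs
x = np.sin(2 * np.pi * 10.0 * t)          # wanted in-band signal at 10 Hz
x += 0.1 * np.sin(2 * np.pi * 480.0 * t)  # out-of-band tone at 480 Hz

decim = 10  # new Nyquist is 50 Hz; 480 Hz folds to 20 Hz (480 = 5*100 - 20)

naive = x[::decim]                               # no anti-alias filter
proper = signal.decimate(x, decim, ftype='fir')  # FIR filter, then decimate

for name, y in (('naive', naive), ('filtered', proper)):
    f, pxx = signal.welch(y, fs / decim, nperseg=4096)
    print(name, 'PSD at 20 Hz:', pxx[np.argmin(np.abs(f - 20.0))])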

There is a risk of wasting bits too early when decimating. The trouble 
comes when the actual signal is way below the noise and you want to 
bring it out in post-processing; the limited dynamic range will haunt 
you. This has been shown many times before.

Also, noise and quantization have an interesting interaction.
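
A toy numpy illustration of that interaction (all the amplitudes are 
invented): a sine at 0.3 LSB peak vanishes in a bare rounding 
quantizer, but with about 1 LSB rms of Gaussian noise acting as dither 
it survives, and coherent averaging brings it back out from under the 
noise.

import numpy as np

rng = np.random.default_rng(42)
n, period = 1000000, 1000
t = np.arange(n)
x = 0.3 * np.sin(2 * np.pi * t / period)  # 0.3 LSB peak: below one step

bare = np.round(x)                               # all zeros: signal is gone
dithered = np.round(x + rng.standard_normal(n))  # ~1 LSB rms dither

# Fold onto the known period and average: the dither noise averages
# down as 1/sqrt(n/period) and the sub-LSB sine reappears.
print('bare peak:    ', np.abs(bare.reshape(-1, period).mean(axis=0)).max())
print('dithered peak:', np.abs(dithered.reshape(-1, period).mean(axis=0)).max())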

> It actually took a fair amount of work to *prove* that a system I was 
> working on
> a) accurately measured the signal (in the presence of other large signals)
> b) that there weren't numerical issues causing the strong signal to show 
> up in the low level signal filter bins
> c) that the measured noise floor matched the expectation

It's tricky business indeed. The cross-correlation technique can 
potentially measure below its own noise floor. It turns out to be very 
very VERY hard to do that safely, and it remains a research topic. At 
best we just barely managed to work around the issue. That is indeed a 
high dynamic-range setup.
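
The basic idea, in a hedged numpy/scipy sketch (levels and lengths 
invented): two channels share a weak common "DUT" noise 20 dB below 
each channel's own independent noise, and Welch-averaging the complex 
cross-spectrum pulls the common part out, converging only as 
1/sqrt(number of averages).

import numpy as np
from scipy import signal

rng = np.random.default_rng(1)
n, fs = 2**20, 1000.0
dut = 0.1 * rng.standard_normal(n)   # common signal, 20 dB below...
ch1 = dut + rng.standard_normal(n)   # ...each channel's own noise
ch2 = dut + rng.standard_normal(n)

f, p11 = signal.welch(ch1, fs, nperseg=4096)
f, c12 = signal.csd(ch1, ch2, fs, nperseg=4096)  # averaged cross-spectrum

# The uncorrelated channel noise only decorrelates as 1/sqrt(m); the
# real part of the cross-spectrum converges toward the common DUT PSD.
print('single-channel PSD ~', p11.mean())       # roughly 2e-3 (channel noise)
print('cross-spectrum PSD ~', c12.real.mean())  # roughly 2e-5 (DUT noise)

Any common-mode leakage or correlated spur between the two channels 
shows up as a false floor, which is exactly the "hard to do safely" 
part.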

Cheers,
Magnus

