[time-nuts] Limitations of Allan Variance applied to frequency divided signal?

Tijd Dingen tijddingen at yahoo.com
Sun May 15 20:01:41 UTC 2011


Hi Magnus,

Magnus Danielson wrote:
>>> Notice that the pre-scaler is only used for higher frequencies.

>> Understood. I was just using the prescaler as an example for the "what if
>> I take every Nth edge" case.

> Consider then the typical measurement setup:

> A counter is set up to make a time interval measurement from channel A 
> to channel B on each occurrence of an external arm trigger. Consider that 
> a GPS provides a PPS pulse to the external arm input and a 10 MHz signal 
> to channel A. The DUT provides a 10 MHz signal to channel B.

> In this setup there will be 10 million cycles on channels A and B. This 
> is not a problem for ADEV/AVAR. The tau will be 1 s or integer 
> multiples thereof.

> However, if you want a quality measure at 1 s then you had better measure 
> at a higher speed of say 1 kHz in order to get a higher amount of data 
> without having to wait veeery long. Algorithmic improvements have been 
> made to achieve higher quality quicker on the same data. Overlapping 
> measures make fair use of data for shorter taus.

Check. That is what I understood the "Overlapped variable tau estimators"
bit on wikipedia to be about. Same raw data, smarter processing.
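
To check my understanding, the overlapping estimator would be something
like this in Python (my own naming, so caveat lector; x is a list of
phase/time-error samples taken at a fixed interval tau0, m the averaging
factor, so tau = m * tau0):

def overlapping_adev(x, m, tau0):
    # Overlapping Allan deviation: second differences of phase,
    # advanced one sample at a time instead of m samples at a time.
    tau = m * tau0
    n = len(x) - 2 * m              # number of overlapping differences
    if n < 1:
        raise ValueError("not enough samples for this tau")
    s = 0.0
    for i in range(n):
        d = x[i + 2 * m] - 2.0 * x[i + m] + x[i]
        s += d * d
    avar = s / (2.0 * n * tau * tau)
    return avar ** 0.5              # ADEV = sqrt(AVAR)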


> Notice that you need to adjust your data for cycle-slips. If you don't 
> do that you will get a significant performance hit, with an ADEV curve 
> typically several decades higher than expected.

"Adjust for cycle-slips"... You mean the following ... ?

Your processing back-end receives a series of timestamps from the timestamper.
The timestamper claims "this is the timestamp for cycle number XYZ. No, really!".
However, you notice that given the distribution of all the other (cycle_no, time)
pairs this would be hard to believe. If however you add +1 to that "claimed"
cycle number, then it fits perfectly. So you adjust the cycle number by one,
under the assumption that /somewhere/ 1 cycle got lost. Somewhere being a PLL
cycle slip, an FPGA counter missing a count, etc...

That sort of adjustment I take it? If yes, then understood. If not, I'm afraid
I don't follow. :)
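
To make it concrete, roughly what I'd code up (Python sketch, names are
mine; it assumes slips are persistent, i.e. a counter that misses a count
stays off by one from then on):

def fix_cycle_slips(records, period):
    # records: (claimed_cycle_no, timestamp) pairs sorted by time;
    # period: nominal period of the input signal, same unit as timestamps
    out = [records[0]]
    offset = 0                                  # accumulated correction in cycles
    for n, t in records[1:]:
        n_prev, t_prev = out[-1]
        actual = round((t - t_prev) / period)   # cycles the timestamps imply
        claimed = (n + offset) - n_prev         # cycles the counter claims
        offset += actual - claimed              # fold any slip into the offset
        out.append((n + offset, t))
    return out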


>>> You never time-stamp individual cycles anyway, so a pre-scaler doesn't make
>>> much difference. It does limit the granularity of the tau values you use, but
>>> usually not in a significant way since Allan variance is rarely used for taus
>>> shorter than 100 ms and well... pre-scaling usually is below 100 ns so it
>>> isn't a big difference.

>> Well, I can certainly /try/ to be able to timestamp individual cycles. ;) That way
>> I can for example characterize oscillator startup and such. Right now I can only
>> spit out a medium resolution timestamp every cycle for frequencies up to about
>> 400 MHz, and a high resolution timestamp every cycle for frequencies up to
>> about 20 MHz.

>> Medium resolution being on the order of 100 ps, and high resolution being on
>> the order of 10 ps. The medium resolution is possibly even a little worse than
>> that due to non-linearities, but there are still a few ways to improve that. It just
>> requires an awful lot of design handholding to manually route parts of the
>> FPGA design. I.e.: "I will do that later. Much much later". ;->

>> But understood, for Allan variance you don't need timestamps for every individual
>> cycle.

> No. Certainly not.

> I do lack one rate in your discussion: your time-stamp rate, i.e. the 
> maximum sample rate you can handle, as limited by the minimum time 
> between two measurements. For instance, a HP5372A has a maximum sample 
> rate of 10 MS/s in normal mode (100 ns to store a sample) while in fast 
> mode it can do 13.33 MS/s (75 ns to store a sample). The interpolator 
> uses a delay architecture to provide quick turn-around interpolation 
> which gives only 200 ps resolution (100 ps resolution is supported in 
> the architecture if only boards had been designed for it, so there is a 
> hidden upgrade which never came about).

> Do you mean to say that your low resolution time-stamping rate is 400 
> MS/s and high resolution time-stamping rate is 20 MS/s?

That is what I mean to say. There are still design issues with both modes,
so it could become better, could become worse. Knowing how reality works,
probably worse. ;-> But those numbers are roughly it, yes.

At the current stage: 200 MS/s at the lower resolution is easy. 400 MS/s
is trickier.

The reason: 400 MHz is the full tap sampling speed, and I can barely keep
up with the data. The data is from a 192-tap delay line, incidentally.
Active length is typically about 130 taps, but I have to design for the worst case.
Or rather the best case, because needing all those taps to fit a 2.5 ns cycle
would be really good news. ;) But hey, we can always hope for fast silicon,
right?

Anyways, the first 5 pipeline stages right after the taps work at 400 MHz.
The second part (as it happens also 5 stages) works at 200 MHz, if only
for the simple reason that the block RAM in a Spartan-6 has a max frequency
of about 280 MHz. So the 200 MHz pipeline processes 2 timestamps in parallel.

For this part of the processing I have spent more design effort on the modules
that are responsible for the high resolution timestamps. So low resolution,
200 MS/s == done. 400 MS/s == working on it. :P


> It is perfectly respectable to skip a number of cycles, but the number 
> of cycles must be known. One way is to have an event-counter which is 
> sampled, or you always provide samples at a fixed distance 
> event-counter-wise such that the event-counter can be rebuilt 
> afterwards. The latter method saves data, but has the drawback that your 
> observation period becomes dependent on the frequency of the signal, 
> which may or may not be what you want, depending on your application.

What I have now is an event counter which is sampled.
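
So on the processing side, turning those (event_count, timestamp) pairs
into phase samples should be little more than this (Python sketch, my
naming; f_nom being the nominal frequency of the DUT in Hz):

def phase_samples(records, f_nom):
    # time error of each event relative to where a perfect
    # f_nom oscillator would have put it
    n0, t0 = records[0]
    return [(t - t0) - (n - n0) / f_nom for n, t in records]

Those x values can then go straight into the overlapping estimator above.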


> Recall, you will have to store and process this flood of data. For 
> higher tau plots you will be wading through large amounts of data 
> anyway, so dropping high-frequency data to achieve a more manageable 
> data rate is needed in order to be able to store and process the 
> longer tau data.

Heh, "store and process this flood of data" is the reason why I'm at
revision numero 3 for the frigging taps processor. :P But oh well,
good for my pipeline design skills.
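
For the longer taus, the decimation you describe should at least be cheap
on my side, since every kept sample carries its event count (Python sketch
again; the rates are just an example, not a committed spec):

def decimate(records, keep_every):
    # every kept sample keeps its event count, so the longer
    # intervals stay exact -- no cycle ambiguity is introduced
    return records[::keep_every]

# e.g. from 200 MS/s down to 10 S/s before long-tau processing:
# long_tau = decimate(records, 20_000_000)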


> For most of the ADEV plots on stability, starting at 100 ms or 1 s is 
> perfectly useful, so a measurement rate of 10 S/s is acceptable.

Well, that would be too easy. Where's the fun in that?

> For high speed things like startup burps etc. you have a different 
> set of requirements. A counter capable of doing both would be great, but 
> they usually don't do it.

Check. For me the main purpose of this thing is:
1 - learn new things
2 - be able to measure frequency with accuracy comparable to current commercial counters
3 - monitor frequency stability

Anyways, for now the mandate is to be able to spit out as many timestamps as I can get away
with, and then figure out fun ways to process them. ;)


regards,
Fred

