[time-nuts] Question about frequency counter testing

Bob kb8tq kb8tq at n1k.org
Fri May 11 11:35:50 EDT 2018


Hi

If you do the weighted average as indicated in the paper *and* compare it to a “single sample” computation, 
the results are different for that time interval. To me that’s a problem. To the authors, the fact that the rest of
the curve is the same is proof that it works. I certainly agree that once you get to longer tau, the process 
has no detrimental impact. There is still the problem that the first point on the graph is different depending 
on the technique. 
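A quick way to see the effect described above: for white phase noise, the ADEV point at tau = m*tau0 computed from m-sample averaged phase comes out lower than the one computed from single samples, roughly by sqrt(m). A minimal numpy sketch (noise level, m, and sample count are illustrative, not from any counter in this thread):

```python
import numpy as np

rng = np.random.default_rng(1)
N, tau0, m = 200_000, 1e-3, 10
x = rng.normal(0.0, 1e-9, N)          # white phase noise, seconds

def adev(ph, step, tau):
    # overlapping Allan deviation at the given tau from phase data
    d = ph[2*step:] - 2.0*ph[step:-step] + ph[:-2*step]
    return np.sqrt(np.mean(d**2) / (2.0 * tau**2))

tau = m * tau0
a_single = adev(x, m, tau)                        # "Pi" single-sample points
xa = x[:N - N % m].reshape(-1, m).mean(axis=1)    # m-sample averaged phase
a_avg = adev(xa, 1, tau)                          # same tau, averaged data

print(a_single, a_avg)   # a_avg is roughly a_single / sqrt(m) for white PM
```

The rest of the curve (larger tau) is essentially unaffected, which matches the authors' argument; it is only this first point that moves.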

The other side of all this is that ADEV is really not a very good way to test a counter. It has its quirks and its
issues. They are impacted by what is in a counter, but that’s a side effect. If one is after a general test of 
counter hardware, one probably should look at other approaches.

If you are trying specifically just to measure ADEV, then there are a lot of ways to do that by itself. It’s not
clear that re-inventing the hardware is required to do this. Going with an “average down” approach ultimately
*will* have problems for certain signals and noise profiles. 

Bob

> On May 11, 2018, at 10:42 AM, Oleg Skydan <olegskydan at gmail.com> wrote:
> 
> Hi
> 
> --------------------------------------------------
> From: "Bob kb8tq" <kb8tq at n1k.org>
>> The most accurate answer is always “that depends”. The simple answer is no.
> 
> I spent yesterday evening and quite a bit of the night :) reading many interesting papers and several related discussions in the time-nuts archive (the Magnus Danielson posts in the "Modified Allan Deviation and counter averaging" and "Omega counters and Parabolic Variance (PVAR)" topics were very informative and helpful, thanks!).
> 
> It looks like the trick to combine averaging with the possibility of correct ADEV calculation in the post processing exists. There is a nice presentation made by prof. Rubiola [1]. There is a suitable solution on page 54 (at least that is how I understood it, maybe I am wrong). I can switch to usual averaging (Lambda/Delta counter) instead of LR calculation (Omega counter); the losses should be very small in my case. With such averaging the MDEV can be correctly computed. If ADEV is needed, the averaging interval can be reduced and several measurements (more than eight) can be combined into one point (creating a new weighting function which resembles the usual Pi one, as shown in [1], p. 54), so it should be possible to calculate usual ADEV from such data. As far as I understand, the filter formed by the resulting weighting function will have a wider bandwidth, so the impact on ADEV will be smaller and it can be computed correctly. Am I missing something?
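The combined weighting described above can be visualized directly: summing n contiguous triangular (Lambda) weights, each shifted by one averaging interval, gives a flat-topped trapezoid that approaches the Pi (rectangular) weighting as n grows. A small sketch (m and n are illustrative, not the firmware's actual values):

```python
import numpy as np

m = 100   # phase samples per Lambda (triangular) estimate  (assumed)
n = 8     # number of contiguous Lambda estimates combined   (assumed)

# normalized triangular (Lambda) weight spanning 2m-1 samples
tri = np.concatenate([np.arange(1, m + 1), np.arange(m - 1, 0, -1)]).astype(float)
tri /= tri.sum()

# sum n copies, each shifted by m samples, then renormalize
w = np.zeros((n - 1) * m + 2 * m - 1)
for k in range(n):
    w[k * m : k * m + 2 * m - 1] += tri
w /= n

# w now has ramps of width ~m at each end and a flat top in between;
# the flat fraction grows with n, approaching a Pi weighting
print(w.sum(), w[0], w[m])
```

The tapered edges are what make the equivalent noise bandwidth differ slightly from a true Pi counter, which is why the first ADEV point still needs some care.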
> 
> I have made the necessary changes in the code. The firmware now computes the Delta average, and also combines Delta-averaged measurements (resulting in a trapezoidal weighting function); both numbers are computed with continuous stamping and optimal overlapping. Everything is done in real time. I did some tests. The results are very similar to the ones obtained with LR counting.
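The overlapped Delta averaging described above can be done in a single pass with running sums. A simplified numpy sketch (an offline illustration, not the real-time MCU firmware):

```python
import numpy as np

def delta_freq_estimates(x, m, tau0):
    """Maximally overlapped Delta (Lambda-weighted) frequency estimates
    from phase samples x: m-sample running means, differenced at lag m."""
    c = np.concatenate(([0.0], np.cumsum(x)))
    block = (c[m:] - c[:-m]) / m            # running mean of m phase samples
    return (block[m:] - block[:-m]) / (m * tau0)

# usage: a perfectly linear phase ramp gives a constant frequency estimate
tau0, f0 = 1e-3, 2.5
x = f0 * tau0 * np.arange(1000)
est = delta_freq_estimates(x, 50, tau0)
print(est[:3])   # each estimate recovers f0
```

These overlapped estimates are exactly what MDEV-style post-processing wants as input.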
> 
> [1] http://www.rubiola.org/pdf-slides/2012T-IFCS-Counters.pdf
>   E. Rubiola, High resolution time and frequency counters, updated version.
> 
> All the best!
> Oleg UR3IQO 
> _______________________________________________
> time-nuts mailing list -- time-nuts at febo.com
> To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
> and follow the instructions there.
