[time-nuts] Dual Mixer

Bruce Griffiths bruce.griffiths at xtra.co.nz
Wed May 12 10:26:36 UTC 2010

WarrenS wrote:
> Bruce
> Good, It does seem like we are finally making some good progress.
> You now seem to acknowledge that my tester could work if I integrate.
> You now seem to acknowledge that I am integrating by using a filter.
In a sampled-data system integration is equivalent to a filter, but not 
just any arbitrary low-pass filter.
The errors in your method are explicitly spelled out in the paper I gave 
the link to:
In this paper x_i is a phase sample and y_i is a frequency sample.
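The relationship between the two sample types can be sketched as follows. This is a generic illustration of my own (not code from the paper): frequency samples y_i are first differences of phase samples x_i, and recovering phase from frequency is a discrete integration, i.e. a cumulative sum, not an arbitrary low-pass filter.

```python
import numpy as np

tau0 = 1.0e-3                       # sample interval for 1 kS/s logging
rng = np.random.default_rng(0)
x = np.cumsum(rng.normal(size=10_000)) * 1e-9   # made-up phase record, seconds

# Frequency samples from phase: first difference scaled by the interval.
y = np.diff(x) / tau0

# Phase back from frequency: discrete integration (cumulative sum).
x_rec = np.concatenate(([x[0]], x[0] + np.cumsum(y) * tau0))

# The round trip is exact up to floating-point error.
print(np.max(np.abs(x_rec - x)))
```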
> I acknowledge that my integration method is not perfect, BUT it is 
> simple and good enough.
Not yet proven nor quantified.
> It would seem the only issue left is to show you just how good of 
> answers my integration method gives.
> At least now we are JUST talking about what the S/W needs to do.
> Hopefully you now see that the hardware is adequate.
> What would you consider an acceptable error band, 3 dB, 1 dB, 0.1 dB?  
> Pick a number >> zero.
The answer depends on how long one is willing to spend making the 
measurements.
Certainly 0.1 dB or better would require heroic efforts to demonstrate.
Since the error will also depend on the phase noise spectra of the 
oscillators being compared, a single-figure answer isn't feasible.
However, for the case where white phase noise dominates, the error 
should be no more than 1 dB, and potentially much less.
The errors due to digital signal processing should be at least an order 
of magnitude lower.
> For a typical high speed data log taken at say 1 K samples per second, 
> one would generally run a quick test with maybe a minute's worth of data.
> That would provide enough data to give a good tau plot up to about 10 
> seconds.
That's a rather sweeping statement, given that no estimates of the 
contribution to measurement noise from the finite number of samples 
have been made.
The maximum usable tau for a given record length depends on the maximum 
acceptable error due to the finite number of samples.
> Now if you can supply me with a 60K data log with any type of 
> reasonably typical noise that you want to include in it
> I'll show you how close my approximate Integration comes to your 
> perfect integration.
You can't, because your method of "perfect integration" isn't perfect, 
and its errors cannot be made sufficiently small with so few samples.

> I can set this up to do as many times as you want, until I have 
> demonstrated by example that it is close enough,
> for every data log case that you will provide. Near enough IS good 
> enough for me and most Nuts.
Quantify "near enough", else all is just noise.
> As John pointed out, this is measuring noise. One is not going to get 
> the exact same answer twice in a row anyway.
> My answer will not be perfect, but it will be simple and fast and easy 
> and below the noise uncertainty band.
> Your turn to put a data log where your math is.  Do try and remember 
> I'm working with Frequency and not phase.
That's idle speculation, as you haven't quantified anything at all.
The repeatability of the measurements needs to be quantified.
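For reference, computing stability directly from frequency samples (the quantity under discussion) is straightforward; the following is the textbook non-overlapped Allan deviation estimator, with names and structure of my own choosing, not anyone's actual software from this thread.

```python
import numpy as np

def allan_deviation(y: np.ndarray, m: int) -> float:
    """ADEV at tau = m*tau0 from fractional-frequency samples y,
    using non-overlapping m-sample averages."""
    k = len(y) // m
    yb = y[: k * m].reshape(k, m).mean(axis=1)    # tau-averaged frequency samples
    return float(np.sqrt(0.5 * np.mean(np.diff(yb) ** 2)))

rng = np.random.default_rng(1)
y = rng.normal(size=60_000)        # toy white-FM data, unit variance
print(allan_deviation(y, 1), allan_deviation(y, 100))
```

On white FM noise of unit variance the estimator should return about 1 at m = 1 and about 0.1 at m = 100 (the familiar 1/sqrt(tau) slope), which makes it easy to sanity-check any competing implementation.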
> BTW, just a heads-up warning to be fair. I have set up this situation 
> so that I cannot lose.
It's actually almost trivial to produce a set of samples for which any 
given method will fail.
Doing so is an unproductive exercise.
> If you want to setup your own situation go for it. I'll see if I can 
> do it.
> Only requirement is that it should be broken down into no more than 
> 60K sample sizes max for each test at the start.
> After I pass that,  if you want to go for millions of samples or 
> whatever, fine as long as I can read the text data log file.
> ws
