[time-nuts] Question about frequency counter testing

Bob kb8tq kb8tq at n1k.org
Fri Apr 27 13:22:50 EDT 2018


Hi

So what’s going on here? 

With any of a number of modern (and not so modern) FPGAs you can run a clock in the 400 MHz region. 
Clocking with a single edge gives you a 2.5 ns resolution. On some parts, you are not limited to a single 
edge. You can clock with both the rising and falling edge of the clock. That gets you to 1.25 ns. For the 
brave, there is the ability to phase shift the clock and do the trick yet one more time. That can get you
to 0.625 ns. You may indeed need to drive more than one input to get that done. 
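As a quick back-of-the-envelope check, those resolutions follow directly from the clock period. A minimal Python sketch (the 400 MHz figure is just the example clock rate used above):

```python
# Timestamp resolution from an FPGA counter clock, as described above.
f_clk = 400e6                    # example fabric clock rate (400 MHz)

single_edge = 1.0 / f_clk        # timestamp on rising edges only
both_edges = single_edge / 2     # use rising and falling edges
phase_shifted = both_edges / 2   # add a 90-degree phase-shifted clock copy

# Values work out to 2.5 ns, 1.25 ns, and 0.625 ns respectively.
print(single_edge, both_edges, phase_shifted)
```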

As you get more and more fancy, the chip timing gets further into your data. A very simple analogy is
the non-uniform step size you see on an ADC. Effectively you have a number with a +/- ?.?? sort
of tolerance on it. As before, that may not be what you expect in a frequency counter. It still does not mean
that the data is trash. You just have a source of error to contend with. 

You could also feed the data down a “wave union” style delay chain. That would get you into the 100 ps
range, with further linearity issues to contend with. There are also calibration issues, as well as temperature
and voltage dependencies. Even the timing in the multi-phase clock approach will have some voltage
and temperature dependency. 

Since it’s an FPGA, coming up with a lot of resources is not all that crazy expensive. You aren’t buying 
gate chips and laying out a PCB. A few thousand logic blocks is tiny by modern standards. Your ideal counter
or delay line might fit in < 100 logic blocks. There’s lots of room for pipelines and I/O this and that. 
The practical limit is how much you want to put into the “pipe” that gets the data out of the FPGA.

In the end, you are still stuck with the fact that many of the various TDC chips offer higher resolution at lower cost. 
You also have a pretty big gap between raw chip price and what a fully developed instrument will run. 
That’s true regardless of what you base it on and how you do the design. 

Bob



> On Apr 26, 2018, at 5:28 PM, Oleg Skydan <olegskydan at gmail.com> wrote:
> 
> From: "Hal Murray" <hmurray at megapathdsl.net>
> Sent: Thursday, April 26, 2018 10:28 PM
> 
>> Is there a term for what I think you are doing?
> 
> I have seen different terms, like "omega counter" or "multiple time-stamp
> average" counter; there are probably others too.
> 
>> If I understand (big if), you are doing the digital version of magic
>> down-conversion with an A/D.  I can't even think of the name for that.
> 
> No, it is much simpler. The hardware saves a time-stamp to memory on
> each rising edge (event) of the input signal (let's assume a digital logic
> input signal for simplicity). So after some time we have many pairs of
> {event number, time-stamp}. We can plot those pairs with the event number on
> the X-axis and time on the Y-axis; if we fit a line to that dataset, the
> inverse slope of the line corresponds to the estimated frequency.
> 
> The line is fitted using linear regression.
> 
> This technique improves frequency uncertainty as
> 
> 2*sqrt(3)*t_resolution / (MeasurementTime * sqrt(NumberOfEvents - 2))
> 
> So if I have 2.5 ns HW time resolution and collect 5e6 events, the
> processing should result in 3.9 ps resolution.
> 
> Of course, this is for the ideal case. The first real-life problem is
> signal drift, for example.
> 
> Hope I was able to explain what I am doing.
> 
> BTW, I have fixed a little bug in the firmware and now the ADEV looks a bit better.
> Probably I should look for better OCXOs. Interesting thing: the counter
> processed 300 GB of time-stamp data during that 8+ hour run :).
> 
> All the best!
> Oleg 
> <1133.png>
> _______________________________________________
> time-nuts mailing list -- time-nuts at febo.com
> To unsubscribe, go to https://www.febo.com/cgi-bin/mailman/listinfo/time-nuts
> and follow the instructions there.
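Oleg's multi-timestamp regression estimate is easy to simulate. The sketch below is a minimal illustration, not his firmware: it quantizes ideal edge times of an assumed input to a 2.5 ns counter grid, fits a line by least squares, and evaluates the uncertainty formula he quotes. The specific numbers (input frequency, event count) are made up for the example.

```python
import numpy as np

f_in = 9.999987e6   # assumed input frequency (chosen not to align with the grid)
t_res = 2.5e-9      # hardware timestamp resolution (400 MHz clock)
n_ev = 100_000      # number of captured events (kept small for the sketch)

event = np.arange(n_ev)
t_true = event / f_in                      # ideal rising-edge times
t_meas = np.round(t_true / t_res) * t_res  # timestamps quantized to the counter grid

# Linear regression of time vs. event number: the slope is the
# estimated period, so its inverse is the estimated frequency.
slope, intercept = np.polyfit(event, t_meas, 1)
f_est = 1.0 / slope

# Predicted relative frequency uncertainty from the formula quoted above.
T = t_meas[-1] - t_meas[0]
sigma_rel = 2 * np.sqrt(3) * t_res / (T * np.sqrt(n_ev - 2))

print(f_est, sigma_rel)
```

With these numbers the fitted frequency recovers the input far more tightly than the raw 2.5 ns single-shot quantization alone would suggest, which is the whole point of the technique.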


