[time-nuts] Notes on tight-PLL performance versus TSC 5120A

Bruce Griffiths bruce.griffiths at xtra.co.nz
Thu Jun 3 12:26:23 UTC 2010


WarrenS wrote:
> Bruce Posted
>
>    
>>   "Rectangular integration isn't particularly accurate or efficient, better techniques exist."
>>      
> True, but in this case it is the easiest, and at these speeds efficiency is not a big concern; it is made up for with faster oversampling.
> And so far it is obvious that better is not needed here; this is 'Good enough'.
> (And that answers your other question of why I don't do it better.)
>
>    
Why do that when it's so easy to do much better?
> You do bring up an interesting point.   There are lots of things that could be (and have been) done better than on that simple one-IC BB circuit that was tested, and yet it was good enough to match the TSC 5120A pretty much point for point over the whole tau range and ADEV range (limited only by its Ref Osc). Think KISS; enough said.
>
>
>    
You've entirely missed the point: such errors need to be quantified, not 
swept under the carpet.
Trapezoidal integration is almost as simple as rectangular integration 
and comes at low cost.
However, if you look at equation 44 in the paper I cited, an even better 
technique is described.
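For concreteness, here is a minimal sketch of the difference (this is
not the tester's or John's code; the function names and the choice of
accumulating phase in cycles are mine):

  #include <cstddef>

  // Accumulate frequency-difference samples y[0..n-1] (in Hz), taken
  // at rate_hz, into phase (in cycles).  Rectangular rule: each sample
  // is treated as constant over its whole sample interval.
  double integrate_rect(const double *y, std::size_t n, double rate_hz)
  {
      double phase = 0.0;
      for (std::size_t k = 0; k < n; k++)
          phase += y[k] / rate_hz;
      return phase;
  }

  // Trapezoidal rule: average each pair of adjacent samples.  Same cost
  // per sample, but for a smooth input the truncation error falls as
  // 1/N^2 rather than 1/N.
  double integrate_trap(const double *y, std::size_t n, double rate_hz)
  {
      double phase = 0.0;
      for (std::size_t k = 1; k < n; k++)
          phase += 0.5 * (y[k - 1] + y[k]) / rate_hz;
      return phase;
  }

Equation 44 of the cited paper does better still, but the change from the
first loop to the second is essentially free.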
>> "Try to desist from the pathetic attempts at insults"
>>      
> I don't know what that even means, but sorry about the oversampling comments.
> It did seem that you did not know what that was, or at least its advantages when it comes to simplifying things.
> Ditto on the phase and frequency differences comment, which I fear may still be the case.
>
>    
Sure, if one has a sufficiently high oversampling factor, crude 
approximations may work reasonably well.
However, one ought to strive to do better, particularly when it's easy to 
do so and requires no additional hardware.
One doesn't always have the luxury of an extremely high oversampling ratio.
A better filter than a single-pole RC filter may also be required to 
take full advantage.
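To put a rough number on that: the N-sample boxcar average has deep nulls
at multiples of the decimated output rate but poor attenuation between
them, so the RC filter has to supply the remaining alias rejection. A
minimal sketch (the sample rate, oversampling factor, and RC corner below
are made-up illustration values, not the tester's):

  #include <cmath>
  #include <cstdio>

  const double PI = 3.14159265358979323846;

  // Magnitude response of an N-point boxcar average at sample rate fs.
  double boxcar_mag(double f, int n, double fs)
  {
      double x = PI * f / fs;
      if (std::fabs(std::sin(x)) < 1e-12)
          return 1.0;                    // DC and multiples of fs
      return std::fabs(std::sin(n * x) / (n * std::sin(x)));
  }

  // Magnitude response of a single-pole RC with corner frequency fc.
  double rc_mag(double f, double fc)
  {
      return 1.0 / std::sqrt(1.0 + (f / fc) * (f / fc));
  }

  int main()
  {
      const double fs = 1000.0;          // ADC rate, Hz (assumed)
      const int    n  = 100;             // oversampling factor (assumed)
      const double fc = 50.0;            // RC corner, Hz (assumed)

      // Worst-case image: midway between the boxcar's first two nulls,
      // i.e. 1.5x the decimated output rate fs/n.
      double f = 1.5 * fs / n;
      double h = boxcar_mag(f, n, fs) * rc_mag(f, fc);
      std::printf("|H| at %.1f Hz = %.3f (%.1f dB)\n",
                  f, h, 20.0 * std::log10(h));
      return 0;
  }

With these numbers the worst-case image is only about 14 dB down, which is
why a steeper analog filter, or a proper decimating filter, can matter.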
>> "You seem to be unaware of just how easy it is to create a dataset for which any given algorithm will fail catastrophically."
>>      
> True, I'm unaware of ANYTHING normal that will make this fail.
> Send me something; it'll be fun to try it. If you can break it, I can fix it.
>
> Let's start small: give me any two numbers, and I'll give you the average.
> Now three, then four; how far do you want to go? I'm sure I can still give you the average value.
> Now for the big test: can I give you the average of several of the previous averages? That is not going to be a problem either.
> I can give you the average of any reasonable numbers that could be presented to the ADC in normal operation through the restricted PLL BW and the BW filter.
> That IS about ALL there is to it.   It needs to give the AVERAGE frequency of the oversampled average frequencies.
> Then it needs to give the average of these averages. As long as all samples are taken at the same rate, it works well.  (Average = Sum_nSamples / n)
> The oversampling has to be done fast enough, and with the appropriate B/W limit, so that there is no dead time or aliasing and no significant change during one sample period.
>   The other H/W takes care of that: oversampling rate >> TC filter bandwidth.
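(A quick numeric check of the average-of-averages claim above, with
arbitrary data: for equal-length blocks, the average of the block averages
equals the overall average exactly.)

  #include <cstdio>

  int main()
  {
      // Eight arbitrary "frequency" samples, split into two blocks of four.
      double y[8] = { 3.0, 1.0, 4.0, 1.0, 5.0, 9.0, 2.0, 6.0 };

      double overall = 0.0;
      for (int k = 0; k < 8; k++)
          overall += y[k];
      overall /= 8.0;                                  // 3.875

      double b1 = (y[0] + y[1] + y[2] + y[3]) / 4.0;   // first block average
      double b2 = (y[4] + y[5] + y[6] + y[7]) / 4.0;   // second block average

      // Equal-length blocks: the average of the averages is the average.
      std::printf("%g == %g\n", overall, (b1 + b2) / 2.0);
      return 0;
  }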
>
> To understand why that is all that is needed, one need only look at what basic Allan deviation is.
> Allan deviation is the average of the differences between neighboring frequencies, each averaged over a given length of time. That length of time is called tau.
>    
Actually, you should be using fractional frequency differences.
> All that is needed then is to find (the average freq over a tau period) - (the average freq over the next time period of the same length).
> Do some squaring of the differences, some more averaging and scaling, and a square root, and out pops an ADEV answer.
> The important point is that it ALL just starts with the average frequency over a period of time called tau.
> If you can get an accurate average freq over the tau0 time, then any standard Allan calculation S/W can turn it into ADEV, at any tau.
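For reference, here is a minimal sketch of that recipe, using the
standard non-overlapping estimator and taking fractional frequency
samples per the correction above (the names are illustrative, not the
tester's code):

  #include <cmath>
  #include <cstddef>
  #include <vector>

  // Non-overlapping Allan deviation at tau = m * tau0 (m >= 1), given
  // fractional frequency samples y[], one per tau0 seconds: block-average
  // over tau, difference neighboring averages, square, average, halve,
  // square root.
  double adev(const std::vector<double> &y, int m)
  {
      std::vector<double> ybar;                // averages over each tau
      for (std::size_t i = 0; i + m <= y.size(); i += m) {
          double s = 0.0;
          for (int k = 0; k < m; k++)
              s += y[i + k];
          ybar.push_back(s / m);
      }
      if (ybar.size() < 2)
          return 0.0;                          // not enough data

      double acc = 0.0;
      for (std::size_t k = 1; k < ybar.size(); k++) {
          double d = ybar[k] - ybar[k - 1];
          acc += d * d;
      }
      return std::sqrt(acc / (2.0 * (ybar.size() - 1)));
  }

Choosing m selects the tau: average frequency over tau0 in, ADEV at
tau = m * tau0 out.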
>
>
> ws
>
> *******************
>
>    
Bruce
> WarrenS wrote:
>    
>> Bruce posted
>>
>>      
>>> The RC filter doesn't accurately integrate the frequency difference
>>> over time interval Tau0.
>>>        
>> For you to even state that means you still have NO idea what I'm
>> doing. It is getting sort of sad.
>>
>> Correct, the RC filter is not an integrator; it is used as the
>> combined bandwidth and anti-aliasing filter.
>> It is the oversampling average that does the integration.
>>      
> How? Rectangular integration isn't particularly accurate or efficient;
> better techniques exist.
>    
>> What would explain a lot is if you do not know what oversampling even
>> is.
>>      
> Try to desist from the pathetic attempts at insults as they merely
> distract from the real questions about the signal processing techniques
> adopted.
>    
>> You need to get yourself a refresher course on the advantages of
>> oversampling for integration, brick-wall filtering, and anti-aliasing;
>> on why a single RC works just fine for integration when oversampling is
>> used; and on why you don't need anything but simple averaging (the sum
>> of n samples divided by n) when oversampling is used.
>> You don't need all the unnecessary FIR filter crap; just oversample.
>>      
>
> Not so, as anyone with a comprehensive understanding of the subject will
> attest.
> Seat-of-the-pants methods produce misleading predictions when noise isn't
> statistically stationary.
>
>    
>> If you have spare bandwidth like I have, then it sure saves a lot of
>> stuff. Ever hear of "KISS"?
>>      
> Most are aware of the principle, but oversimplification leads to
> erroneous results.
>
>    
>> You need to ask someone to explain that to you some day, along with
>> "close enough".
>> Hint: the simple tester BB only takes ONE IC, and it is just a single
>> op amp.
>>      
> That performance metric is irrelevant if it doesn't measure the desired
> quantity for all cases of interest.
> NB: the spectrum of use cases will vary from one user to another, so the
> limitations of the technique need to be well known.
> These limitations will include limits on the phase noise spectra of the
> devices being compared.
>    
>> And although John's software makes it all much more user friendly and
>> makes user mistakes less likely to occur, it is not needed. It works
>> just fine with no special S/W code or filter S/W.
>>      
>    
>> AND it still does integration just fine. (Send me that data file if
>> you want to see how it works).
>>
>>      
> You seem to be unaware of just how easy it is to create a dataset for
> which any given algorithm will fail catastrophically.
>
>
>    
>> ws
>>
>> **************
>> Bruce last posted:
>>
>> John Miles wrote:
>>      
>>>>> The integration secret (which is no secret to anyone but
>>>>> Bruce) is to analog filter, oversample, then average the
>>>>> frequency data at a rate much faster than the tau0 data rate.
>>>>
>>>> Which again is misleading as you specify neither the averaging method
>>>> nor the analog filter.
>>>>
>>> I can't speak for the analog side as I never saw a schematic of the
>>> PLL, but
>>> it may be worthwhile to point out that the averaging code in question
>>> is in
>>> SOURCE_DI154_proc() in ti.cpp, which is installed with
>>> http://www.ke5fx.com/gpib/setup.exe .  This is my code, not
>>> Warren's.  It
>>> does a simple boxcar average on phase-difference data, the same as my
>>> TSC
>>> 5120 acquisition routine does.  Previous tests indicated that simple
>>> averaging yields a good match to most ADEV graphs on TSC's LCD
>>> display, so I
>>> used it for the PLL DAQ code as well.
>>>
>>> I also tried a Kaiser-synthesized FIR kernel for decimating the
>>> incoming TSC
>>> data, but found that its conformance against the TSC's display was worse
>>> than what I saw with the simple average.  More work needs to be done
>>> here.
>>>
>>>
>>>        
>>>> When will you understand that phase differences and differences of
>>>> average frequency (unit weight to frequency measures over the sampling
>>>> interval, zero weight outside) are equivalent?
>>>>
>>>>          
>>> One subtlety is the question of whether to average (or otherwise
>>> filter) the
>>> DAQ voltage readings immediately after they're acquired and linearly
>>> scaled
>>> to frequency-difference values, versus after conversion of the
>>> frequency-difference values to phase differences.  I found that the best
>>> agreement with the TSC plots was obtained by doing the latter:
>>>
>>>   val = (read and scale the DAQ voltage)
>>>
>>>   // val is now a frequency difference
>>>   // averaging val here yields somewhat higher
>>>   // sigma(tau) values in the first few bins
>>>   // after tau0
>>>
>>>   val = last_phase + (val / DI154_RATE_HZ);
>>>   last_phase = val;
>>>
>>>
>>>        
>> This appears to use a rectangular approximation to the required integral.
>> A trapezoidal or even Simpson's-rule integration technique should be
>> more accurate for a given sample rate.
>> One could even try a higher-order polynomial fit to the sample points;
>> however, this isn't the optimum technique to use.
>>
>> If one uses WKS interpolation to reconstruct the continuous frequency
>> vs. time function and integrates the result over a finite time interval
>> (Tau0), then one ends up with a digital filter with an infinite number
>> of terms.
>> Since an infinite number of samples is required to do this, using a
>> suitable window function is probably advisable.
>>
>> The paper (below) illustrates how AVAR etc. can be calculated from the
>> sampled frequency difference data using DFT techniques:
>>
>> http://hal.archives-ouvertes.fr/docs/00/37/63/05/PDF/alaa_p1_v4a.pdf
>>
>>      
>>>   // val is now a phase difference
>>>   // averaging val here matches the TSC better
>>>
>>> The difference is not huge but it's readily noticeable.
>>>
>>> This is subtly disturbing because the RC filter before the DAQ *does*
>>> integrate the frequency-difference data directly.  If it's correct to
>>> band-limit the frequency-to-voltage data in the last analog stage of the
>>> pipeline, it should be correct to do it in the first digital stage, I'd
>>> think.
>>>
>>>
>>>        
>> The RC filter doesn't accurately integrate the frequency difference over
>> time interval Tau0.
>>      
>>> Further complicating matters is the question of whether the TSC 5120A's
>>> filtering process is really all that 'correct,' itself.  When they
>>> downsample their own data by a large fraction, e.g. when you select
>>> tau0=100
>>> msec / NEQ BW = 5 Hz, there is often a slight droop near tau0 that
>>> does not
>>> correspond to anything visible at higher rates.  To some extent we
>>> may be
>>> attempting to match someone else's bug.
>>>
>>>        
>> This is the result of the designers' choice of low-pass filter
>> bandwidth; the filter bandwidth increases as Tau0 decreases.
>> The traditional analyses of the dependence of AVAR on the bandwidth of
>> this filter assume a brick-wall filter.
>>
>>      
>>> At any rate I've run out of time/inclination to pursue it, at least
>>> for now.
>>> The SOURCE_DI154_proc() routine in TI.CPP is open for inspection and
>>> modification by any interested parties, lines 6753-7045 in the current
>>> build. :)  Warren has his hardware back now, and would presumably be
>>> able to
>>> try any modifications.
>>>
>>> -- john, KE5FX
>>>
>>>        
>>      