[time-nuts] Phase modulation detection/NIST plan

paul paulswedb at gmail.com
Wed Jul 11 19:48:32 UTC 2012


David,
I've read your comments but have been traveling, so this is finally a chance to email.

I read the document as well and came away with what I shared.
From your reading, would you agree with the following?
It's an absolute phase, and when the bit switches to 0 there is one
transition at the beginning of the second to 180 degrees, staying that
way until the next bit, or flipping back to 0 degrees if there is a 1
at the next 1 second tick???
Is there a way to tell from the document that there is a bias towards
0, let's say?
I could not figure that out.

What I have seen on the scope makes me believe a 0 could involve
multiple flips during the 0 bit, and that's perplexing.
Regards
Paul.


On 7/9/2012 1:53 AM, David I. Emery wrote:
> On Sun, Jul 08, 2012 at 09:02:53PM -0400, Bob Camp wrote:
>> Hi
>>
>> The gotcha is that they may change the sync word based on test data.
>> They may also tweak other vague points in the spec based on the troubles
>> they run into in their tests or with their silicon.
> 	I finally read the wwvb.pdf paper (yes, do so before opening
> mouth)...
>
> 	I think I read the "Binary Phase Shift Keying Modulation"
> paragraph on page 10 to indicate they are using ABSOLUTE, not
> differential BPSK.
>
> 	They refer to the "baseband waveforms s0(t) and s1(t)".   To me
> this is the absolute I vector... and this clearly says  that a 0 is
> always upward (or by convention in phase), and a one is always downward
> (180 out)...   They clearly say the phase shift is 180 degrees...
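
Concretely, that reading maps a 0 bit to the in-phase carrier and a 1
bit to the inverted carrier, with no dependence on the previous bit.
A minimal sketch of such an absolute-BPSK waveform (the names and
sample rate here are mine, not the paper's):

    import numpy as np

    FS = 240_000     # sample rate, 4x the 60 kHz carrier
    FC = 60_000      # WWVB carrier frequency

    def absolute_bpsk(bits):
        """Absolute (not differential) BPSK at one bit per second:
        a 0 bit is the in-phase carrier s0(t), a 1 bit is the
        carrier shifted 180 degrees, s1(t) = -s0(t)."""
        t = np.arange(FS) / FS                 # one second of samples
        carrier = np.cos(2 * np.pi * FC * t)
        return np.concatenate(
            [carrier if b == 0 else -carrier for b in bits])
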
>
> 	I would think this clearly could be phrased better...
>
> 	It appears the data format they propose is quite well defined in
> the paper, though they clearly indicate that a proposed extension is
> changing the Barker code sync word for frames every so often so as to
> indicate a different frame type that might contain highly entropic (e.g.
> volatile and unpredictable) information of undefined character,
> including a possible mechanism for sending arbitrary and completely
> a priori unpredictable bitstreams, though doubtless constrained by the
> Hamming codes used for FEC/error detection and the Barker code sync
> word.
>
> 	On a quick read it appears the complete 60 second time frame
> format is defined unambiguously.   There are somewhat unpredictable DST
> bits and leap second bits in there... but in practice those change VERY
> infrequently from one 60 second frame to the next, or even from week to
> week or year to year. (Yes, Congress likes to muck with DST every decade
> or so...).
>
> 	I am still reading more carefully, but I think this means that
> the entire phase and amplitude sequence of the signal is defined for the
> current initial version if you know the time of day and date and the
> current leap second and DST settings (which change VERY infrequently).
> And I *THINK* I understand this means the absolute phase sequence
> relative to the 60 kHz going into the modulator at the transmitter....
>
> 	Thus the initial signal phase modulation could be removed by
> some comparatively simple itty bitty micro software driving a balanced
> modulator, BUT future signal extensions might not have that property.
>
> 	As for acquiring bit sync with the signal, both the amplitude
> and phase information should allow a micro to do this easily and
> relatively quickly if the I vector were provided to the micro somehow.
> This would presumably be possible by either sampling the 60 kHz directly
> with an A/D (at 240 ksample/s) or by using an external balanced mixer
> driven by locally synthesized 60 kHz.   Even just an envelope detector
> would work with strong signals because of the AM component, and this
> might be enough to acquire adequate bit sync for some purposes.
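
For the envelope route, bit sync can be had by folding the rectified,
low-passed envelope modulo one second and correlating against the
carrier drop at each second boundary. A rough sketch, assuming the
envelope has already been decimated to 1 ksample/s (the 0.2 s template
width is the shortest reduced-carrier interval in the legacy AM
format):

    import numpy as np

    def find_second_tick(env, fs=1000):
        """Locate the 1 s tick: fold the AM envelope modulo one
        second, then circularly correlate with a template that is
        low for the first 0.2 s of each second."""
        n = (len(env) // fs) * fs
        folded = env[:n].reshape(-1, fs).mean(axis=0)   # average seconds
        tmpl = np.ones(fs)
        tmpl[: fs // 5] = -1.0                          # low first 0.2 s
        corr = np.fft.ifft(np.fft.fft(folded - folded.mean())
                           * np.conj(np.fft.fft(tmpl))).real
        return int(np.argmax(corr))     # tick offset, in samples
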
>
> 	Software PLLs at a 1 second rate are duck soup for even a SLOW
> micro... and frequency errors are tiny so tracking can be tight. And
> acquisition for these is also very fast given reasonable SNR. It only
> takes forever if the SNR is so low that it takes that many seconds of
> correlation to see a reliable tick.
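
Such a 1 Hz software loop is just a handful of multiply-accumulates
per second. A toy second-order (PI) tracking loop, with illustrative
gains I have not tuned for any particular SNR:

    class SlowPLL:
        """Second-order software PLL updated once per second."""
        def __init__(self, kp=0.1, ki=0.01):
            self.kp, self.ki = kp, ki
            self.freq = 0.0    # accumulated frequency-error estimate
            self.phase = 0.0   # current local tick-phase estimate (s)

        def update(self, phase_error):
            """phase_error: measured tick offset this second (s)."""
            self.freq += self.ki * phase_error               # integral
            self.phase += self.freq + self.kp * phase_error  # proportional
            return self.phase
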
>
> 	I admit as I think about this that if one synthesized the clock
> for an itty bitty simple micro from, say, the local DUT 10 MHz whose
> phase relative to WWVB one is monitoring, one could do much of the
> entire job by using programmable timers on the micro and its internal
> A/D.   This includes phase error versus WWVB output and of course TOD
> output.
>
> 	One would almost certainly want to either use external balanced
> mixers (FET switches?) and produce analog I and Q (low pass
> filtered) for processing by a really slow micro, or use a fast enough
> one to take a stream of actual real 60 kHz input samples at 240 kHz and
> compute filtered I and Q (and LP filter/decimate it) (yes, with
> accurate A/D clocking from suitable micro output pin interval timers
> you might well be able to subsample by a lot and never actually deal
> with anything close to a 240 kHz sample stream on the micro).
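
Sampling at exactly four times the carrier makes the mixing trivial:
successive samples land at 0, 90, 180, and 270 degrees of the 60 kHz
cycle, so the cos and sin mixes reduce to sign flips. A minimal
sketch (further low-pass filtering and decimation left out):

    import numpy as np

    def iq_from_4x(samples):
        """I/Q extraction with the A/D clocked at 4x 60 kHz
        (240 ksample/s).  Returns one raw I/Q pair per four input
        samples; filter and decimate these before use."""
        s = np.asarray(samples, dtype=float)
        s = s[: len(s) // 4 * 4].reshape(-1, 4)
        i = s[:, 0] - s[:, 2]    # cos mix: +1, 0, -1, 0
        q = s[:, 1] - s[:, 3]    # sin mix: 0, +1, 0, -1
        return i, q

The subsampling trick works the same way: any A/D clock whose samples
still land on those four carrier phases, while skipping whole carrier
cycles in between, yields the same sign-flip arithmetic at a far lower
sample rate.
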
>
> 	This would of course allow computation of the vector positions
> of the WWVB signal modulation in I and Q space relative to the 10 MHz
> clock from the DUT.   And from that one should be able to compute
> the various moments of 10 MHz DUT clock drift and do a decent job
> of compensating for it (better and better as the DUT clock gets more
> stable/predictable) and ride out fairly long fades and outages without
> losing a pretty  good idea of the expected WWVB phase.
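
Given averaged I/Q pairs with the known BPSK already wiped off, the
phase and a first drift moment fall out of a simple fit. A sketch,
assuming one I/Q pair per second:

    import numpy as np

    def phase_and_drift(i, q, dt=1.0):
        """Unwrapped phase of the WWVB vector vs. the DUT-derived
        LO, plus a least-squares frequency-error estimate."""
        phi = np.unwrap(np.arctan2(q, i))       # radians vs. local clock
        t = np.arange(len(phi)) * dt
        slope, _ = np.polyfit(t, phi, 1)        # rad/s
        frac_freq = slope / (2 * np.pi * 60e3)  # fractional frequency error
        return phi, frac_freq
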
>
> 	Presumably most standards whose phase one is tracking with such
> setups are very stable, thus the holdover should be considerable
> if one uses a good error and drift estimate to adjust one's local
> idea of WWVB phase relative to the local clock derived from the standard
> to compensate.   And guess what, determining a local error and drift
> estimate is precisely what such a system is doing...
>
> 	And yes, one could also drive a balanced modulator with the micro
> to generate a de-PSKed signal for legacy receivers in the museum, but
> in fact the micro would already know all the things typically
> provided by legacy equipment (phase of the local DUT standard versus
> WWVB, time of day, and possibly SNR and/or signal strength).
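
The de-PSK drive itself is almost nothing: once the micro knows the
time of day it can predict every bit, so it only has to tell the
balanced modulator when to invert. A sketch of the control stream:

    def depsk_control(predicted_bits):
        """Yield +1 (pass) or -1 (invert) once per second so the
        known BPSK is wiped off and a legacy receiver sees a plain
        AM carrier.  predicted_bits is the sequence the micro
        reconstructs from the time of day and the frame format."""
        for b in predicted_bits:
            yield +1 if b == 0 else -1   # invert the 180-degree seconds
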
>
> 	It has occurred to me that it might be necessary to either
> squelch (e.g. turn off) the de-PSKed output to the legacy receivers on
> signal fades, or perhaps BPSK modulate it with a pseudo-random bit
> stream at several times the 1 second bit rate, to ensure that the legacy
> receivers never saw anything they thought was good lock until the time
> of day lock allowed reliable determination of absolute WWVB signal phase
> (and the new code with FEC seems to make this VERY reliable).
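
Any maximal-length LFSR would do for that scrambling bit stream; for
example a 7-bit one (x^7 + x^6 + 1, period 127), clocked at several
times the 1 Hz bit rate:

    def prbs_bits(state=0x7F):
        """7-bit maximal-length LFSR (x^7 + x^6 + 1).  BPSK the
        legacy output with this during fades so those receivers
        never see a phase stable enough to mistake for lock."""
        while True:
            fb = ((state >> 6) ^ (state >> 5)) & 1
            state = ((state << 1) | fb) & 0x7F
            yield state & 1
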
>
> 	Enough drunken ramblings late at night...
>
>
>
>> Bob
>>
>> On Jul 8, 2012, at 8:07 PM, Magnus Danielson wrote:
>>
>>> On 07/09/2012 12:46 AM, paul wrote:
>>>> Peter, indeed there could be.
>>>> But it should not need to be decoded to undo the PSK.
>>>> Plus, I think the documentation lacks some of the details needed to
>>>> actually do it.
>>>> But that would be a significant project, since the format has not
>>>> been settled completely yet.
>>> I have looked at the PTTI 2011 paper (wwvb.pdf) and much of the format is shown. Has anyone established the 14 bit sync word and verified the format? It seems that aligning with the normal AM broadcast should be possible.
>>>
>>> Can someone record it after it has been mixed down to, say, 2 kHz and analyze the resulting audio file? Recording at a 48 kHz sampling rate should allow almost trivial 2 kHz I/Q demodulation to illustrate the phase swaps.
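
A minimal sketch of that analysis, assuming a recording of WWVB mixed
down to a 2 kHz IF and saved as a 48 kHz WAV file ("wwvb_2khz.wav" is
a placeholder name):

    import numpy as np
    from scipy.io import wavfile

    fs, x = wavfile.read("wwvb_2khz.wav")
    x = x.astype(float)
    if x.ndim > 1:
        x = x[:, 0]                            # take one channel
    t = np.arange(len(x)) / fs
    iq = x * np.exp(-2j * np.pi * 2000.0 * t)  # mix the 2 kHz IF to DC
    lp = np.convolve(iq, np.ones(48) / 48, mode="same")  # crude LP filter
    iq_dec = lp[::48]                          # decimate 48 kHz -> 1 kHz
    phase = np.degrees(np.angle(iq_dec))       # 180-degree swaps stand out
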
>>>
>>> Cheers,
>>> Magnus




