[time-nuts] Harmonics suppression in ring oscillators

Florian Teply usenet at teply.info
Sat Mar 21 15:49:29 EDT 2015


On Thu, 19 Mar 2015 22:26:15 +0100,
Attila Kinali <attila at kinali.ch> wrote:

> Moin,
> 
> On Thu, 19 Mar 2015 21:50:03 +0100
> Florian Teply <usenet at teply.info> wrote:
> 
> > My guess would be slightly different: the fundamental mode of
> > oscillation could be considered the lowest energy state of all
> > oscillation modes. Assuming that the system wants to minimize
> > energy, this would be the mode to choose if it can't get into a
> > steady state. But here we are back in wild guess land, and I'm not
> > even sure that the concept of minimum energy states has any meaning
> > in this context.
> 
> That argumentation would work if all oscillation modes had a single,
> global energy source with a rate (power) limit. Lasers are an example
> of this: there, the one mode with the highest gain will suck up all
> the energy from the other modes, and the pump source replenishes the
> energy at a fixed, limited rate. But in a ring oscillator, the energy
> is provided to each element separately and replenished as needed,
> i.e. there is no competition for energy between the different modes
> (all switching edges walk around at the same speed and there are
> never two edges at the same gate).
> 
Umm, the situation might not be as clear-cut for CMOS technologies as
it is for lasers, but there are still some analogies:

The power supplies on chip are to some degree a limiting factor here.
Higher frequencies mean switching more often, and with standard loads
in CMOS being capacitive, that translates to charging and discharging
capacitors more often (i.e. more average current consumption). Locally
this can have a significant effect on the supply voltage, and CMOS
gates become slower when their supply voltage is reduced. Usually,
though, not to the extent that higher oscillation modes become totally
impossible...
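
To put rough numbers on that argument, here is a little back-of-the-
envelope Python sketch; it just combines the usual C*V*f estimate of
dynamic current with an alpha-power-law style delay model. Every
parameter value (load capacitance, grid resistance, threshold voltage,
number of gates on the local rail) is an invented placeholder, not
data for any real process:

# Rough sketch: how local supply droop from switching current slows a CMOS gate.
# Every parameter value here is an invented placeholder, not real process data.

N_GATES = 500        # gates switching on the same local supply strap (assumed)
C_LOAD  = 2e-15      # switched capacitance per gate, farads (assumed)
V_DD    = 1.2        # nominal supply voltage, volts (assumed)
V_T     = 0.4        # threshold voltage, volts (assumed)
R_GRID  = 5.0        # effective local supply-grid resistance, ohms (assumed)
ALPHA   = 1.3        # exponent of the alpha-power-law delay model (assumed)
K       = 1e-12      # arbitrary delay scaling constant

def gate_delay(v):
    # alpha-power-law style estimate: t_d ~ K * V / (V - V_T)**ALPHA
    return K * v / (v - V_T) ** ALPHA

t_nom = gate_delay(V_DD)
for f_switch in (1e9, 2e9, 4e9):                # fundamental vs. higher ring modes
    i_avg = N_GATES * C_LOAD * V_DD * f_switch  # average dynamic current on the rail
    v_loc = V_DD - R_GRID * i_avg               # local supply after IR droop
    slow  = gate_delay(v_loc) / t_nom - 1.0
    print(f"{f_switch/1e9:.0f} GHz: I_avg = {i_avg*1e3:.1f} mA, "
          f"V_local = {v_loc:.3f} V, delay +{slow*100:.1f} %")

With made-up numbers in that ballpark, the droop-induced slowdown
comes out at the percent level, which fits the observation above:
higher modes get slower, but not impossible.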

> Hmm...  maybe the assumption that all edges walk around at the
> same speed is wrong?
> 
Well, in general this assumption is wrong: the gate delay that sets
the oscillation frequency of an RO is anything but constant. At least
for common CMOS technologies, there are several pitfalls. The gate
delay as measured via the oscillation frequency is the average of the
propagation delays for rising and falling edges. I have yet to come
across a single combination of CMOS process and digital core library
that actually has balanced propagation delays, that is, equal numbers
for both rising and falling edges. Commonly, falling edges (on the
output) are somewhat faster than rising edges, as n-channel MOS
transistors tend to have higher saturation currents than p-channel
types, and core cell libraries are usually riddled with design
tradeoffs in that regard.
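
Just to make that averaging concrete, here is a minimal Python sketch
with made-up delay numbers (no characterized library data behind
them):

# Minimal sketch: ring-oscillator frequency from unbalanced rise/fall delays.
# Delay values are invented placeholders, not characterized library data.

N_STAGES = 31        # odd number of inverters in the ring (assumed)
T_PHL    = 18e-12    # propagation delay for a falling output edge, s (assumed)
T_PLH    = 24e-12    # propagation delay for a rising output edge, s (assumed)

# Over one full period every stage produces one falling and one rising edge,
# so the period is T = N * (T_PHL + T_PLH): only the average delay is visible.
period = N_STAGES * (T_PHL + T_PLH)
f_osc  = 1.0 / period
t_avg  = period / (2 * N_STAGES)

print(f"oscillation frequency: {f_osc/1e6:.1f} MHz")
print(f"gate delay inferred from f_osc: {t_avg*1e12:.1f} ps "
      f"(actual fall/rise: {T_PHL*1e12:.0f} ps / {T_PLH*1e12:.0f} ps)")

The point being that the oscillation frequency only ever reveals the
average of the two delays, never the individual rise and fall numbers.
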
And as mentioned above, higher switching frequencies translate to a
lower effective supply voltage locally at the gate, which also
increases propagation delays. Then there is power dissipation, which
is roughly linear with frequency. Add in the dependence of transistor
parameters on temperature, plus the thermal time constants on chip,
and you're getting closer to the effects that play a role in the
data-dependent jitter Hal Murray mentioned in his answer. And there's
even more that plays a role: think of global and local mismatch
between devices, and process variation (which need not be uniform for
n- and p-channel devices, and only in very rare cases affects both
types in the same way so as to preserve the drive balance between
them). Of course, all of this CAN be addressed analytically (that is,
in circuit simulations before manufacturing) and fed back for
optimization, but in the semiconductor industry this is not commonly
done, as it would need to take into account far too many variables
which are unknown to the ASIC design engineer and often cannot be
known a priori.
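
For what it's worth, the purely random part of that (local mismatch)
can be eyeballed with a quick Monte Carlo toy model; the sigma and the
nominal delays below are pulled out of thin air for illustration only:

# Toy Monte Carlo: spread of the RO period due to random per-stage mismatch.
# Nominal delays and the sigma are arbitrary illustration values, not process data.
import random

N_STAGES = 31
T_PHL, T_PLH = 18e-12, 24e-12   # nominal fall/rise delays, seconds (assumed)
SIGMA = 0.05                    # 5 % relative 1-sigma mismatch per stage (assumed)
N_RUNS = 10000

periods = []
for _ in range(N_RUNS):
    # every stage gets its own random deviation for its rise and fall delay
    period = sum(random.gauss(T_PHL, SIGMA * T_PHL) +
                 random.gauss(T_PLH, SIGMA * T_PLH)
                 for _ in range(N_STAGES))
    periods.append(period)

mean = sum(periods) / N_RUNS
rms  = (sum((p - mean) ** 2 for p in periods) / N_RUNS) ** 0.5
print(f"mean period {mean*1e9:.3f} ns, sigma {rms*1e12:.2f} ps "
      f"({100 * rms / mean:.2f} % spread over {N_RUNS} samples)")

Run like that, the per-stage scatter largely averages out around the
ring (the period spread is much smaller than the 5 % per-stage sigma),
whereas a global process shift, e.g. an unbalanced n/p corner, moves
all stages in the same direction and does not average out.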

Best regards,
Florian

