I'm simulating the jitter performance of a driven circuit (an inverter chain, for example). I'm confused by the simulation setup and would like to ask a couple of questions about the setup parameters.
First I set sweeptype=relative and relharmnum=1 and run the simulation. Then I change to relharmnum=0 and compare the output results from the two simulations. The difference is very small (less than 0.01 fs). Per the Q&A on Cadence's support website, if we set relharmnum=1, we're looking at the frequency range
from f0 + 1×fstart to f0 + 1×fstop (f0 is the fundamental frequency, fstart/fstop are the sweep frequency limits, and the 1 is the relharmnum).
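The way I read that Q&A, the mapping is as below. This is just a toy sketch of my understanding, not Spectre code; f0 and the sweep limits are example numbers, not from my actual testbench:

```python
# Sketch of how I understand the pnoise sweep mapping (example numbers).
f0 = 1e9                    # fundamental frequency, Hz
fstart, fstop = 1e3, 500e6  # sweep frequency limits, Hz

def swept_band(relharmnum):
    """Absolute output-frequency band analyzed for a given relharmnum."""
    return (relharmnum * f0 + fstart, relharmnum * f0 + fstop)

print(swept_band(1))  # (f0 + fstart, f0 + fstop)
print(swept_band(0))  # (fstart, fstop) -- a completely different band
```

If this mapping is right, relharmnum=1 and relharmnum=0 should analyze non-overlapping bands, which is exactly why the near-identical results puzzle me.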
I think this is exactly the frequency range I'm interested in. But if we follow the equation above, setting relharmnum=0 should shift the analyzed band down to just fstart..fstop, yet it didn't seem to have that effect: the tiny difference between the two simulation results suggests the setting barely changed anything, and I wonder why.
I already explored the previous discussion about how to set fstop when simulating a driven circuit. The suggestion is to set it to 1/2 of f0 (the fundamental tone frequency), which is consistent with the Nyquist sampling theorem. But I'd like to know whether that conflicts with the up/down-conversion of noise, whose frequency can be up to N times f0 (with N set by maxsideband).
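The way I picture the folding (and please correct me if this is wrong): the sweep limits only set the *output offset* axis, while maxsideband controls how far up in source frequency noise is sampled, so noise at each |k*f0 + df| for |k| ≤ maxsideband folds onto the offset df. A toy sketch of that picture, with f0, maxsideband, and the offset as made-up example numbers:

```python
# Sketch of noise folding in a pnoise analysis, as I understand it:
# for an output offset df, each sideband k (|k| <= maxsideband) converts
# noise from the source frequency |k*f0 + df| up/down to the offset df.
f0 = 1e9         # fundamental, Hz (example number)
maxsideband = 7  # example setting

def contributing_freqs(df):
    """Source frequencies that fold onto output offset df."""
    return sorted(abs(k * f0 + df) for k in range(-maxsideband, maxsideband + 1))

freqs = contributing_freqs(0.4 * f0)  # an offset inside fstop = f0/2
print(max(freqs) / f0)                # 7.4: noise well above f0/2 still contributes
```

If that's correct, then even with fstop = f0/2 the analysis still samples noise up to roughly maxsideband × f0, and there would be no conflict between the two settings. Is that the right picture?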
Thanks for helping clear up my questions; any comments are welcome!