Period jitter has a different definition than time interval error. Time interval error (TIE) data contains the time differences between each waveform threshold crossing and the corresponding threshold crossings of a clock with an ideal period T. Period jitter data contains only the difference between the measured clock period (i.e., the time between the threshold crossings of consecutive rising or falling edges) and the ideal period.

Period jitter can be viewed as a first "difference" function of the jitter, since it relies on the difference in position of two consecutive threshold crossings, whereas time interval error is an absolute deviation from the ideal edge position. As such, period jitter is effectively a frequency-weighted version of the jitter: jitter at some frequencies is amplified and at others attenuated (in particular, slow, low-frequency wander is largely suppressed by the differencing). (In fact, period data need not come from adjacent periods. The JEDEC standard for measuring period jitter indicates that after measuring one period, one waits a random amount of time before measuring the next period.) Period jitter can be derived from time interval error data. I hope this makes sense to you.
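To make that derivation concrete, here is a minimal NumPy sketch (not from any standard or instrument API, and using made-up values for the ideal period and jitter amplitudes) that treats period jitter as the first difference of consecutive TIE samples and compares the two rms values:

```python
import numpy as np

def period_jitter_from_tie(tie):
    """Derive period jitter samples from consecutive TIE samples.

    tie[n] is the deviation of edge n from its ideal position, so the
    n-th period error is (t[n+1] - t[n]) - T = tie[n+1] - tie[n]:
    period jitter is simply the first difference of the TIE sequence.
    """
    return np.diff(np.asarray(tie, dtype=float))

# Hypothetical TIE record dominated by slow (low-frequency) wander.
n = np.arange(10_000)
rng = np.random.default_rng(0)
tie = 20e-12 * np.sin(2 * np.pi * n / 5000) + 1e-12 * rng.standard_normal(n.size)

pj = period_jitter_from_tie(tie)
print(f"rms TIE:           {np.std(tie) * 1e12:.2f} ps")
print(f"rms period jitter: {np.std(pj) * 1e12:.2f} ps")
# The slow wander dominates rms TIE but is largely removed by the first
# difference, so rms period jitter comes out much smaller in this example.
```

With these assumed numbers, the two rms figures differ by an order of magnitude, which illustrates the frequency weighting described above.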
Hence, it is not surprising that the measurement results for rms TIE and rms period jitter are different.
The choice of which one to use (rms of time interval error or rms of period jitter) is determined by your application and its requirements. If only the time between adjacent rising or falling edges is of utmost importance, perhaps the rms period jitter is of greatest value. Without more knowledge about your specific application, I'm not able to suggest which metric is best - sorry!