semi, so of course I couldn't let it be and went reading on the jitter issue. The best description of why all the professional gear syncs clocks and why picoseconds make a difference I found in Harley's excellent book, appendix C. (Funny enough, it's not the timing of the main waveform recovery; it's the fact that the DAC starts to produce sidebands around a reproduced frequency when the word clock it's being fed is jittered. The numbers say those can be heard, signal/noise-ratio-wise, at single nsecs for 16-bit and at low psecs for 20-bit conversions.) Before you cry bloody murder immediately: the assumption/fact there is that the word clock to the DAC _is recovered from the S/PDIF_, so no two independent clocks exist.

Why syncing the clocks should help against jitter rather than reclocking the signal right in front of the DAC, I could not find clearly explained. Methinks the DAC components introduce so much clock jitter on their own that it's better to feed the resulting jitter back to the transport and make sure it delivers the signal jittered the same way the DAC is already jittering its word clock?
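Those thresholds actually check out on the back of an envelope. A minimal sketch (mine, not from Harley's appendix), using the standard approximation SNR = -20*log10(2*pi*f*t_rms) for a full-scale sine at frequency f sampled with rms clock jitter t_rms, and assuming a worst-case 20 kHz signal:

    import math

    def max_jitter_seconds(bits, f_signal_hz=20_000.0):
        # RMS word-clock jitter at which sampling-jitter noise reaches
        # the quantization noise floor of an ideal `bits`-bit converter,
        # for a full-scale sine at f_signal_hz (worst case: 20 kHz).
        # Standard approximations: SNR_quant = 6.02*bits + 1.76 dB,
        # SNR_jitter = -20*log10(2*pi*f*t_rms).
        snr_db = 6.02 * bits + 1.76
        return 10 ** (-snr_db / 20) / (2 * math.pi * f_signal_hz)

    for bits in (16, 20):
        print(f"{bits}-bit: ~{max_jitter_seconds(bits) * 1e12:.0f} ps rms")
    # -> 16-bit: ~99 ps rms; 20-bit: ~6 ps rms
    # (at lower signal frequencies, e.g. 2 kHz, the 16-bit limit relaxes
    #  to ~1 ns, which matches the "single nsecs" figure)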

So, at this point I think the jitter proponents have kind of made their case pretty well, since the argument is logical and their numbers seem about on the mark. However, your point that anything in silicon introduces picosecs of jitter anyway is also very convincing: all this reclocking in silicon and sending via cables to something else that takes it through silicon again may be the correct architecture, with the execution rendering it pointless again. I'm starting to think about wasting some money on the issue ;-)
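One way to put numbers on your silicon objection (the figures below are made up purely for illustration, not measurements of any real hardware): uncorrelated jitter sources add roughly in quadrature, so whatever a reclocker achieves gets swamped by whatever sits between it and the DAC:

    import math

    # Hypothetical rms jitter contributions, in picoseconds
    # (made-up illustrative values):
    stages = {"reclocker output": 2.0,
              "S/PDIF cable + receiver PLL": 50.0,
              "DAC clock input buffer": 30.0}

    # Independent (uncorrelated) jitter sources add in quadrature:
    total = math.sqrt(sum(ps ** 2 for ps in stages.values()))
    print(f"total: ~{total:.0f} ps rms")  # -> ~58 ps, dominated by
                                          #    everything after the reclocker

So unless the reclocker is the last piece of silicon before the converter, the architecture wins you nothing, which I guess is your point exactly.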

real world's amazing

--- tony