Apologies in advance for even more Rob Wattage.
Does any of this make any sense?
Quote
"What can we say about timing errors?
If you had asked me this a few years ago, I would have said that µs accuracy was needed. Now I make no such assumption - there is perhaps no limit to how good the timing of transients needs to be. So how can I substantiate that bold statement? Unlike noise shapers, it is rather difficult to put a number to timing accuracy.

I ought to state what I mean by transient timing accuracy. I do not mean - unlike the rest of the audio business - ringing performance; this is absolutely not what I am thinking about when I talk about the time domain or timing accuracy. Ringing is measured with a signal that is illegal from a sampling-theory point of view, as it is not bandwidth limited, so you would not actually get a perfect impulse from a perfect, legal, bandwidth-limited ADC. Why worry about a signal you will never get? It is actually pointless talking about it.

What I mean is the accuracy of the timing of transients. Imagine a bandwidth-limited analogue signal being sampled in the ADC: it is fully negative, goes positive, and at some point crosses through zero. Say it is sampled at 44.1 kHz, so it is sampled every 22,676 ns. Now imagine the signal is sampled, and then crosses through zero exactly 20,155 ns after that sample. Of course, when it is sampled again 22,676 ns later it will now be a positive value. The question is: when the DAC reconstructs the sampled data - converting it back to a continuous analogue signal - when will the signal cross through zero? Theory is completely clear and undeniable - if we use an infinite oversampling FIR filter with a sinc response for the 22,676 ns sample period and a perfect DAC, we will reconstruct the zero crossing absolutely perfectly at 20,155 ns. But with a finite, non-sinc reconstruction filter, it will not cross through at exactly 20,155 ns - maybe at 19,000 ns or 21,000 ns. It is these differences in the timing of transients that I am talking about. In the past I would have said that getting it right to a µs was perhaps OK (timing errors can be as big as 100 µs in conventional filters) - now I know that instead of worrying about microseconds we need to worry about getting it correct to nanoseconds.
What is the evidence for that view? In designing Dave, I wanted to discover what I had done in the Hugo design (it was a happy accident) to give me the timing performance that I so enjoyed with it. By this I mean the ability to hear the stopping and starting of notes. After trying different things, I traced this quality to the interpolation filters after the WTA filter. Now with Hugo, I used a 16FS WTA filter, followed by a linear interpolator and a two-stage IIR filter filtering up to 2048FS. Changing this to a 256FS WTA filter followed by my usual 3-stage filtering gave a massive change in sound quality - at this point Dave was sounding impossibly rich and smooth and almost soft sounding. Changing it to the 256FS WTA gave a substantial change in character - it was still smooth, but very fast, and you could hear the starting and stopping much more easily. It went in character from soft and smooth to fast and sharp - when the occasion demanded.
Now, moving the WTA from 16FS (data every 1,417 ns) to 256FS (data every 89 ns) is technically a very small change, in the sense that the transient accuracy of a WTA versus an IIR filter at this speed is not a vast difference in the time domain - it is a very subtle difference, but it was nonetheless extremely audible. What it tells me is that very small - impossibly small - timing errors are very significant for the brain's ability to process the ear data."
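If I am following the zero-crossing example correctly, here is a minimal Python sketch I put together (entirely my own toy, nothing to do with the actual WTA filter): it samples a band-limited 1 kHz tone at 44.1 kHz with its rising zero crossing placed 20,155 ns after a sample instant, reconstructs the waveform with a generic Hann-windowed sinc interpolator of various lengths, and reports how far the reconstructed crossing lands from the true time. The tone, the window and the tap counts are all my own assumptions; the only point is that a short, non-ideal filter moves the crossing by some nanoseconds, while a longer approximation of the ideal sinc homes in on the true value.

Code

import numpy as np

FS = 44_100.0            # sample rate (Hz)
T = 1.0 / FS             # sample period, ~22,676 ns
F0 = 1_000.0             # test tone frequency, well inside the audio band (my choice)
T0 = 20_155e-9           # "true" rising zero crossing, 20,155 ns after sample 0

def signal(t):
    # Band-limited test signal: a pure sine whose rising zero crossing sits at T0.
    return np.sin(2 * np.pi * F0 * (t - T0))

n = np.arange(-4096, 4096)          # samples either side of t = 0, for interpolator context
samples = signal(n * T)

def reconstruct(t, taps):
    # Hann-windowed sinc reconstruction at time t using `taps` samples around t:
    # a stand-in for a finite, non-ideal FIR reconstruction filter.
    k0 = int(np.floor(t / T))
    ks = np.arange(k0 - taps // 2 + 1, k0 + taps // 2 + 1)
    window = np.hanning(len(ks))
    return np.sum(samples[ks + 4096] * window * np.sinc(t / T - ks))

def zero_crossing(taps):
    # The crossing lies somewhere inside the first sample period; locate it by bisection.
    lo, hi = 0.0, T
    for _ in range(60):
        mid = 0.5 * (lo + hi)
        if reconstruct(mid, taps) < 0.0:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

for taps in (8, 32, 128, 1024):
    err_ns = (zero_crossing(taps) - T0) * 1e9
    print(f"{taps:5d}-tap reconstruction: zero-crossing error = {err_ns:+9.2f} ns")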
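And to get a feel for why pushing the "proper" filter from 16FS up to 256FS before a cheap final interpolation stage might matter, here is a second toy comparison (again entirely my own construction: a generic polyphase FIR from scipy standing in for the WTA stage, and naive linear interpolation standing in for the final stage, neither of which is what Chord actually uses). The 44.1 kHz samples go up to either 16x or 256x with the FIR, then up to 2048x by linear interpolation, and the script reports where the rising zero crossing lands at the output. I picked a 19 kHz tone because the curvature between samples, and hence the linear-interpolation error, is largest near the top of the band.

Code

import numpy as np
from scipy.signal import resample_poly

FS = 44_100.0
F0 = 19_000.0            # near-Nyquist tone: worst case for the linear-interpolation stage
T0 = 20_155e-9           # true rising zero crossing, 20,155 ns after sample 0

n = np.arange(-512, 512)
x = np.sin(2 * np.pi * F0 * (n / FS - T0))      # band-limited input, sampled at 44.1 kHz

def crossing_error_ns(fir_ratio, total_ratio=2048):
    # Stage 1: linear-phase polyphase FIR up to fir_ratio * FS (stand-in for the "proper" filter).
    # Stage 2: naive linear interpolation up to total_ratio * FS (the cheap final interpolator).
    y = resample_poly(x, fir_ratio, 1)                          # delay-compensated FIR upsampling
    t_mid = n[0] / FS + np.arange(len(y)) / (fir_ratio * FS)
    t_out = n[0] / FS + np.arange(len(y) * (total_ratio // fir_ratio)) / (total_ratio * FS)
    z = np.interp(t_out, t_mid, y)                              # linear-interpolation stage
    i = np.searchsorted(t_out, 0.0)                             # start just after t = 0...
    while not (z[i] < 0.0 <= z[i + 1]):                         # ...and walk to the rising crossing
        i += 1
    t_cross = t_out[i] - z[i] * (t_out[i + 1] - t_out[i]) / (z[i + 1] - z[i])
    return (t_cross - T0) * 1e9

for ratio in (16, 256):
    print(f"FIR to {ratio:3d}x, then linear to 2048x: crossing error = {crossing_error_ns(ratio):+.3f} ns")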
Again, I admit I cannot comment on the technical aspects of what Rob Watts says, which is why I need your help to understand your claim that "There's no point providing timing accuracy at greater than 2x our max timing resolution ability and that currently stands at 0.05ms".
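Finally, just to line the numbers up so I am sure I am comparing like with like (my arithmetic only, assuming the 0.05 ms figure means 50 µs):

Code

fs = 44_100
for label, seconds in [
    ("claimed timing resolution limit (0.05 ms)", 0.05e-3),
    ("44.1 kHz sample period",                    1 / fs),
    ("16FS sample period",                        1 / (16 * fs)),
    ("256FS sample period",                       1 / (256 * fs)),
]:
    print(f"{label:44s} {seconds * 1e9:10,.0f} ns")

So the errors Rob Watts says are audible sit orders of magnitude below the 0.05 ms figure, which is exactly the gap I am hoping someone can help me understand.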