> cites? Also, was that THD, or what?

As for citations, I can't put my finger on any right now; it's what I was taught back in the 1960s. Yes, it would be THD, and it referred to audibility on speech and music, not test tones.
> SINAD is basically THD+N, which is a combination of all distortions and noise in the signal.

Not all distortions -- just the ones that are harmonics of the fundamental tone.
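To make the distinction concrete, here is a minimal numerical sketch (Python/NumPy; the tone, harmonic levels, and noise floor are made-up illustration values, not measurements): THD sums only the harmonic bins, while THD+N counts everything that isn't the fundamental, and SINAD is the same ratio inverted.

```python
import numpy as np

fs = 48_000                     # sample rate (Hz), arbitrary choice
n = fs                          # one second of signal
t = np.arange(n) / fs
f0 = 1_000                      # fundamental test tone (Hz), lands on an exact FFT bin

# Synthetic "device output": fundamental plus two harmonics plus noise
rng = np.random.default_rng(0)
x = (np.sin(2 * np.pi * f0 * t)
     + 1e-3 * np.sin(2 * np.pi * 2 * f0 * t)    # 2nd harmonic, -60 dB
     + 5e-4 * np.sin(2 * np.pi * 3 * f0 * t)    # 3rd harmonic, -66 dB
     + 2e-4 * rng.standard_normal(n))           # broadband noise

power = np.abs(np.fft.rfft(x)) ** 2
fund_bin = f0 * n // fs
fund = power[fund_bin]

# THD: harmonic bins only (2*f0, 3*f0, ... up to Nyquist)
harmonics = range(2 * fund_bin, len(power), fund_bin)
thd = np.sqrt(sum(power[b] for b in harmonics) / fund)

# THD+N: everything except DC and the fundamental bin
thd_n = np.sqrt((power.sum() - fund - power[0]) / fund)

print(f"THD   = {100 * thd:.4f} %  ({20 * np.log10(thd):.1f} dB)")
print(f"THD+N = {100 * thd_n:.4f} %  -> SINAD = {-20 * np.log10(thd_n):.1f} dB")
```

With these made-up levels the two come out close, but any non-harmonic spur or hum would raise THD+N while leaving THD untouched.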
> Probably astonishing SINADs are inaudible, but based on my experience of e.g. Benchmark, the same kind of meticulous design attention also shows up in channel separation, which I have found to be audible. I'm not sure where and when the accepted threshold figure for separation was derived. Was it early in the stereo era? Maybe it should be revisited.

20 dB separation is sufficient for decent stereo placement, and any electronics does a lot better, so that leaves loudspeakers and rooms. Any idea what the separation in-room actually is?
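On the measurement side, separation is just the ratio of the level in the driven channel to what leaks into the other one. A tiny sketch (Python; the function name and the 10%-leakage example are hypothetical, purely for illustration):

```python
import numpy as np

def channel_separation_db(driven: np.ndarray, leaked: np.ndarray) -> float:
    """Separation = RMS of the driven channel over RMS of the leakage
    appearing in the opposite channel, expressed in dB."""
    rms = lambda s: np.sqrt(np.mean(np.square(s)))
    return 20 * np.log10(rms(driven) / rms(leaked))

# Hypothetical case: left channel driven with a tone, right channel
# picks up 10% of it, i.e. -20 dB crosstalk.
t = np.linspace(0, 1, 48_000, endpoint=False)
left = np.sin(2 * np.pi * 1_000 * t)
right = 0.1 * left
print(f"{channel_separation_db(left, right):.1f} dB")   # ~20.0 dB
```

An in-room measurement would substitute microphone captures for the two arrays, which is where the open question above lies.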
> Bear in mind that analogue tape had 3% distortion and around -60 dB noise at best, with around 0.1% W&F, so that puts into perspective all those vintage recordings so praised for their quality.

Not to nitpick, but 1950s-1960s analog tape at normal operating level (185 nWb/m) had distortion of roughly 0.6% to 1% using the usual tape of the day, Scotch 111 or its equivalent. The level which produced 3% distortion was about 6-10 dB above that, and this level is what was generally used for the signal-to-noise specification, which was typically 55 dB for half-track 15 ips. The CCIR equalization curve used in Europe suffered less low-frequency distortion than machines in America, which used the NAB curve. This is because the NAB curve specifies pre-emphasis on the low end by as much as 6 dB, which can cause more distortion with program material that has lots of low frequencies, like pipe organ.
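To tie those numbers together, a small arithmetic sketch (Python; only the figures quoted above go in, and the derived fluxivities are illustrative, not measured values):

```python
import math

ref_flux = 185                 # nWb/m, normal operating level
headrooms = (6, 10)            # dB above reference where tape hits ~3% THD
sn_at_3pct = 55                # dB, half-track 15 ips spec, referenced to the 3% level

for h in headrooms:
    flux_3pct = ref_flux * 10 ** (h / 20)     # fluxivity at the 3% point
    noise_below_ref = sn_at_3pct - h          # noise relative to operating level
    print(f"3% point: {flux_3pct:.0f} nWb/m (+{h} dB); "
          f"noise sits {noise_below_ref} dB below operating level")
```

So a 55 dB S/N referenced to the 3% point puts the noise floor only about 45-49 dB below normal operating level.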
You're right about operating levels, except that most tape machines were used with VU meters, so peak levels were often if not always well above normal operating level as set on line-up tone. A few machines, more broadcast than recording, were used with PPMs, so would have lower distortion, albeit at the expense of noise. Later, higher-output tapes like Scotch 206 or Ampex Grand Master upped operating levels to 320 nWb/m, and even then peak levels went higher, as VU meters always under-read.
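For scale, the step from 185 to 320 nWb/m works out to just under 5 dB; a one-line check:

```python
import math

# Elevated operating level of high-output tape vs. the older reference fluxivity
print(f"{20 * math.log10(320 / 185):.1f} dB")   # -> 4.8 dB
```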
The companies which are selling high-end tape copies today use the CCIR curve, thankfully.
Back to our regularly scheduled thread.......