dallasjustice
Major Contributor
I'll try to put words in Mitch's mouth as well as my own. It's been my experience that correcting the time domain closer to linear phase does improve soundstage clarity. The spatial relationships between different instruments or vocals become better defined.
But how can soundstage clarity be studied? Are there any studies which have really investigated the above described phenomenon by comparing digitally corrected time domain versus normal passive speakers?
I have asked Kevin Voecks (director of product development at Harman/Revel) why they don't compensate for the time differential. He pointed to Dr. Vanderkooy's work, and to conversations with him, which showed the differential simply is not audible with music in listening rooms (it can be, to some extent, with headphones or in an anechoic chamber). He said that making the drivers time-aligned would force them to use lower-order crossovers, which would create other problems that definitely were audible.
I have not spoken to Dr. Toole about it, but here are some quotes from his book that say the same thing:
"This [results of blind listening tests] suggests that we like flat amplitude spectra and we don’t like resonances, but we tolerate general phase shift, meaning that waveform fidelity [both amplitude and phase] is not a requirement.
[...]
Loudspeaker transducers, woofers, midranges, and tweeters behave as minimum-phase devices within their operating frequency ranges (i.e., the phase response is calculable from the amplitude response). This means that if the frequency response is smooth, so is the phase response, and as a result, the impulse response is unblemished by ringing. When multiple transducers are combined into a system, the correspondence between amplitude and phase is modified in the crossover frequency ranges because the transducers are at different points in space. There are propagation path-length differences to different measuring/listening points. Delays are non-minimum-phase phenomena. In the crossover regions, where multiple transducers are radiating, the outputs can combine in many different ways depending on the orientation of the microphone or listener to the loudspeaker.
The result is that if one chooses to design a loudspeaker system that has linear phase, there will be only a very limited range of positions in space over which it will apply. This constraint can be accommodated for the direct sound from a loudspeaker, but even a single reflection destroys the relationship. As has been seen throughout Part One of this book, in all circumstances, from concert halls to sound reproduction in homes, listeners at best like or at worst are not deterred by normal reflections in small rooms. Therefore, it seems that (1) because of reflections in the recording environment there is little possibility of phase integrity in the recorded signal, (2) there are challenges in designing loudspeakers that can deliver a signal with phase integrity over a large angular range, and (3) there is no hope of it reaching a listener in a normally reflective room. All is not lost, though, because two ears and a brain seem not to care.
Many investigators over many years have attempted to determine whether phase shift mattered to sound quality (e.g., Greenfield and Hawksford, 1990; Hansen and Madsen, 1974a, 1974b; Lipshitz et al., 1982; Van Keulen, 1991). In every case, it has been shown that if it is audible, it is a subtle effect, most easily heard through headphones or in an anechoic chamber, using carefully chosen or contrived signals. There is quite general agreement that with music reproduced through loudspeakers in normally reflective rooms, phase shift is substantially or completely inaudible. When it has been audible as a difference, when it is switched in and out, it is not clear that listeners had a preference.
Others looked at the audibility of group delay (Bilsen and Kievits, 1989; Deer et al., 1985; Flanagan et al., 2005; Krauss, 1990) and found that the detection threshold is in the range 1.6 to 2 ms, and more in reflective spaces.
Lipshitz et al. (1982) conclude, “All of the effects described can reasonably be classified as subtle. We are not, in our present state of knowledge, advocating that phase linear transducers are a requirement for high-quality sound reproduction.”
Greenfield and Hawksford (1990) observe that phase effects in rooms are “very subtle effects indeed,” and seem mostly to be spatial rather than timbral. As to whether phase corrections are needed, without a phase correct recording process, any listener opinions are of personal preference, not the recognition of “accurate” reproduction.
In the design of loudspeaker systems, knowing the phase behavior of transducers is critical to the successful merging of acoustical outputs from multiple drivers in the crossover regions. Beyond that, it appears to be unimportant.
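Toole's statement that, for a minimum-phase transducer, "the phase response is calculable from the amplitude response" is a Hilbert-transform relation, and it's easy to check numerically. A minimal NumPy sketch (my own illustration, not from the book; the two-tap filter is a made-up minimum-phase example with its only zero inside the unit circle):

```python
import numpy as np

# Hypothetical minimum-phase FIR: single zero at z = -0.5, inside the
# unit circle, so the phase is fully determined by the magnitude.
h = np.array([1.0, 0.5])
N = 1024
H = np.fft.fft(h, N)

# Reconstruct the phase from |H| alone via the real cepstrum
# (the Hilbert-transform relation for minimum-phase systems):
# fold the cepstrum so only causal quefrencies remain.
cep = np.fft.ifft(np.log(np.abs(H))).real
fold = np.zeros(N)
fold[0] = cep[0]
fold[1:N // 2] = 2.0 * cep[1:N // 2]
fold[N // 2] = cep[N // 2]
phase_from_magnitude = np.fft.fft(fold).imag

err = np.max(np.abs(phase_from_magnitude - np.angle(H)))
print(f"max phase reconstruction error: {err:.2e} rad")
```

The reconstruction matches the true phase to numerical precision, which is exactly why a smooth magnitude response implies a well-behaved phase response for an individual driver; it is only the crossover summation and path-length delays that break the relation.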
So I say the research is pretty conclusive and there is really no "problem" here to fix. It is a case of chasing measurements that please the eye instead of the ears. The impulse measurements are devoid of room reflections and use only one microphone instead of two ears and a brain. That is a big difference in acoustics.
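For scale, you can compare the 1.6–2 ms group-delay thresholds Toole cites against what a typical crossover actually introduces. A rough SciPy sketch, assuming a hypothetical 4th-order Linkwitz–Riley lowpass at 2 kHz (the filter choice and sample rate are my illustration, not anything from the quoted studies):

```python
import numpy as np
from scipy.signal import butter, group_delay

fs = 48000  # sample rate, Hz (assumed)
fc = 2000   # assumed crossover frequency, Hz

# 4th-order Linkwitz-Riley lowpass = two cascaded 2nd-order Butterworths,
# i.e. the squared transfer function (convolve the coefficients).
b, a = butter(2, fc / (fs / 2))
b4, a4 = np.convolve(b, b), np.convolve(a, a)

# Group delay in samples over the audio band (stop short of Nyquist,
# where the lowpass zeros make the group delay ill-defined).
w = np.linspace(1e-3, 2.6, 512)   # rad/sample, roughly 8 Hz .. 19.9 kHz
_, gd = group_delay((b4, a4), w=w)
gd_ms = gd / fs * 1000.0

peak_ms = gd_ms.max()
print(f"peak group delay: {peak_ms:.3f} ms (audibility threshold ~1.6-2 ms)")
```

The peak comes out at a small fraction of a millisecond, well under the quoted detection threshold, which is consistent with the conclusion that crossover phase rotation in ordinary speakers is not the audible problem it looks like on an impulse plot.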