Thanks. Good read. He is confusing what MQA means by "time smearing." It is not the same as "time coherent speakers." This is pointless if you then go and replay MQA over speakers which are ridiculously "time smearing" through not being time coherent. I agree with him.
I think he is just taking Meridian at their word when they say:
"timing sensitivity evolved to help us thrive in forest environments. As sounds reach us, microseconds apart, our brains build a 3D sonic ‘picture’. Similarly, at a live performance, we're able to position individual instruments. It’s why live music feels so powerful and a recording seems so flat in comparison. MQA captures this timing information from the master, so it feels like you’re at the performance."
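The "microseconds apart" figure in that quote refers to interaural time differences. As a rough sketch of the scale involved, here is the common Woodworth-style spherical-head approximation; the head radius and source angles are illustrative assumptions, not anything from the post:

```python
import math

# Woodworth-style ITD approximation: ITD ~ (r / c) * (theta + sin(theta)).
# Head radius and angles below are illustrative assumptions.
SPEED_OF_SOUND = 343.0   # m/s, air at roughly 20 C
HEAD_RADIUS = 0.0875     # m, a commonly assumed average head radius

def itd_seconds(azimuth_deg: float) -> float:
    """Approximate interaural time difference for a source at the
    given azimuth (0 degrees = straight ahead)."""
    theta = math.radians(azimuth_deg)
    return (HEAD_RADIUS / SPEED_OF_SOUND) * (theta + math.sin(theta))

for angle in (10, 45, 90):
    print(f"{angle:3d} deg -> {itd_seconds(angle) * 1e6:6.0f} us")
```

Even a source fully to one side yields an ITD on the order of a few hundred microseconds, which is the scale the Meridian marketing copy is alluding to.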
Powerful doesn't suffer compression well; which timing can't restore ...
It need not be compression - some recordings are not compressed. But the perception of "powerful" may be reduced if transient edges are staggered in their timing.
Has Toole written anything about the audibility of time coherence in loudspeakers? Or any other legit publication would be fine.
Someone straighten me out if I'm not remembering correctly: weren't the Revel Salon Ultimas (or Ultima 2s) the speakers that did very well in blind tests, yet whose time-domain response is nothing to write home about?
Have we concluded that deep bass coming from two fixed but uncontrolled points below the midrange and high-frequency drivers is actually easier to integrate than a single sub, even without room correction?
Not at all. It is true that two speakers placed at precise locations provide mode cancellation in that dimension. But outside of that, being able to move a sub anywhere, with DSP, provides a superior solution. To wit, when we used to demonstrate the KEF LS50, we added a Revel sub with DSP correction and put it in a hidden corner. This was a small room -- much smaller than hotel rooms -- and the combined sound was superb: excellent, tight bass and lots of it, seemingly coming from the bookshelf LS50s.
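The mode-cancellation point concerns the low axial-mode frequencies that sub placement has to contend with. A minimal sketch of where those modes fall, using f = n * c / (2 * L) and assuming illustrative room dimensions (they are not from the post):

```python
# Axial room-mode frequencies, f = n * c / (2 * L), for an assumed
# small listening room. Dimensions below are illustrative only.
SPEED_OF_SOUND = 343.0  # m/s

def axial_modes(length_m: float, count: int = 3) -> list:
    """First few axial mode frequencies (Hz) along one room dimension."""
    return [n * SPEED_OF_SOUND / (2.0 * length_m) for n in range(1, count + 1)]

room = {"length": 5.0, "width": 4.0, "height": 2.5}  # metres, assumed
for name, dim in room.items():
    modes = ", ".join(f"{f:5.1f}" for f in axial_modes(dim))
    print(f"{name:>6}: {modes} Hz")
```

The lowest modes land in the 30-70 Hz region, which is why moving a single DSP-corrected sub around the room can tame them regardless of where the mains sit.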
I have spoken to Kevin Voeks (Dr., product development at Harman/Revel) about why they don't compensate for the time differential. He pointed to Dr. Vanderkooy's work, and to conversations with him, which showed the differential simply is not audible with music in listening rooms (it can be, to some extent, with headphones or in an anechoic chamber). He said that by making them time aligned they would have to use lower-order crossovers, which would create other problems that were definitely audible.
I have not spoken to Dr. Toole about it, but here are some quotes from his book that say the same thing:
"This [results of blind listening tests] suggests that we like flat amplitude
spectra and we don’t like resonances, but we tolerate general phase shift, meaning
that waveform fidelity [both amplitude and phase] is not a requirement.
[...]
Loudspeaker transducers, woofers, midranges, and tweeters behave as
minimum-phase devices within their operating frequency ranges (i.e., the phase
response is calculable from the amplitude response). This means that if the
frequency response is smooth, so is the phase response, and as a result, the
impulse response is unblemished by ringing. When multiple transducers are
combined into a system, the correspondence between amplitude and phase is
modified in the crossover frequency ranges because the transducers are at different
points in space. There are propagation path-length differences to different
measuring/listening points. Delays are non-minimum-phase phenomena. In the
crossover regions, where multiple transducers are radiating, the outputs can
combine in many different ways depending on the orientation of the microphone
or listener to the loudspeaker.
The result is that if one chooses to design a loudspeaker system that
has linear phase, there will be only a very limited range of positions in space
over which it will apply. This constraint can be accommodated for the
direct sound from a loudspeaker, but even a single reflection destroys the relationship.
As has been seen throughout Part One of this book, in all circumstances,
from concert halls to sound reproduction in homes, listeners at best
like or at worst are not deterred by normal reflections in small rooms. Therefore,
it seems that (1) because of reflections in the recording environment there
is little possibility of phase integrity in the recorded signal, (2) there are challenges
in designing loudspeakers that can deliver a signal with phase integrity
over a large angular range, and (3) there is no hope of it reaching a listener in
a normally reflective room. All is not lost, though, because two ears and a brain
seem not to care.
Many investigators over many years have attempted to determine whether
phase shift mattered to sound quality (e.g., Greenfield and Hawksford, 1990;
Hansen and Madsen, 1974a, 1974b; Lipshitz et al., 1982; Van Keulen, 1991).
In every case, it has been shown that if it is audible, it is a subtle effect,
most easily heard through headphones or in an anechoic chamber, using carefully
chosen or contrived signals. There is quite general agreement that with
music reproduced through loudspeakers in normally reflective rooms, phase
shift is substantially or completely inaudible. When it has been audible as a
difference, when it is switched in and out, it is not clear that listeners had a
preference.
Others looked at the audibility of group delay (Bilsen and Kievits, 1989; Deer
et al., 1985; Flanagan et al., 2005; Krauss, 1990) and found that the detection
threshold is in the range 1.6 to 2 ms, and more in reflective spaces.
Lipshitz et al. (1982) conclude, “All of the effects described can reasonably
be classified as subtle. We are not, in our present state of knowledge, advocating
that phase linear transducers are a requirement for high-quality sound reproduction.”
Greenfield and Hawksford (1990) observe that phase effects in rooms are
“very subtle effects indeed,” and seem mostly to be spatial rather than timbral.
As to whether phase corrections are needed, without a phase correct recording
process, any listener opinions are of personal preference, not the recognition of
“accurate” reproduction.
In the design of loudspeaker systems, knowing the phase behavior of transducers
is critical to the successful merging of acoustical outputs from multiple
drivers in the crossover regions. Beyond that, it appears to be unimportant."
So I say that the research is pretty conclusive and there is really no "problem" here to fix. It is a case of chasing measurements that please the eye instead of the ears. The impulse measurements are devoid of room reflections and use only one microphone instead of two ears and a brain. A big difference in acoustics.
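Toole's cited group-delay detection threshold of 1.6 to 2 ms puts typical inter-driver path-length differences in perspective. A minimal sketch, where the acoustic-centre offsets are illustrative assumptions (only the 1.6 ms threshold comes from the quoted text):

```python
SPEED_OF_SOUND = 343.0  # m/s

def path_delay_ms(offset_m: float) -> float:
    """Delay (ms) caused by a path-length difference between two drivers."""
    return offset_m / SPEED_OF_SOUND * 1e3

# Assumed driver offsets; 1.6 ms is the low end of the detection-threshold
# range Toole cites (Bilsen and Kievits, et al.).
THRESHOLD_MS = 1.6
for offset_cm in (3, 10, 30):
    delay = path_delay_ms(offset_cm / 100.0)
    print(f"{offset_cm:3d} cm offset -> {delay:5.2f} ms "
          f"(threshold: {THRESHOLD_MS} ms)")
```

Even an unusually large 30 cm offset produces under 1 ms of delay, comfortably below the quoted threshold, which is consistent with the Voeks/Vanderkooy position above.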
Not so. The paper by Vanderkooy sets out to correct everyone by showing that phase delay *is* audible. There are listening tests, etc., in the paper that demonstrate it. It is only in the final conclusions section that he says the conditions under which phase was audible are hard to replicate in real rooms.
As in much of audio, a plausible set of words and selective experiments can be put together to justify any convenient stance or belief.
The paper was published in the peer-reviewed Journal of the AES, where Dr. Lipshitz and Dr. Vanderkooy are both luminaries. 99% of the paper sets out to *prove that phase delay is audible.* It is just that with music, loudspeakers and rooms, it becomes nearly inaudible, and they rightly point that out in their conclusions. It could not be more balanced than that.
At the end of the day, no one knows whether listening tests actually work anyway: consciously listening for differences may make us less sensitive to hearing them. How can you prove whether this is true or not?
Well, listening tests do allow one to pick out some very small differences. Consistently, repeatedly, and reliably. So yes, they demonstrably work.
But this is a circular argument, because the categorisation of "very small" is derived from the smallest differences a human can pick out repeatedly and reliably while consciously listening for them.
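The "consistently, repeatedly, and reliably" standard is usually formalised with a simple one-sided binomial test on an ABX run. A minimal sketch, with assumed trial counts (the 0.05 cut-off is the usual convention, not anything from this thread):

```python
from math import comb

def abx_p_value(correct: int, trials: int) -> float:
    """One-sided binomial p-value: probability of getting at least
    `correct` hits out of `trials` by pure guessing (p = 0.5)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Assumed example runs; 0.05 is the conventional significance cut-off.
for correct, trials in ((12, 16), (9, 16)):
    p = abx_p_value(correct, trials)
    verdict = "significant" if p < 0.05 else "consistent with guessing"
    print(f"{correct}/{trials} correct -> p = {p:.3f} ({verdict})")
```

12 of 16 is unlikely by chance alone, while 9 of 16 is not, which is the sense in which a well-run test can "demonstrably work" even for small audible differences.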
Well, I don't know the exact details of these listening tests that purport to show that phase and timing don't matter (and I'm not sure I draw the same conclusions as you, Amir, from that report you show), but a question I would ask is: do the experiments compare 'a certain type of phase shift' against 'no phase shift', or 'a certain type of phase shift' against 'a different type of phase shift'? Like comparing a scene through rippled glass versus clear glass, or rippled glass versus some other rippled glass. I may not be very good at distinguishing between different types of rippled glass, but I might recognise the clear glass a lot better. I suspect that many of these experiments are comparing two types of rippled glass, having been carried out before meaningful correction was possible, and may have been in mono, with small speakers with inadequate bass, or with full-range drivers and their ilk.
Listening tests in peer-reviewed papers have all the details one would want. Not having read them is no excuse to say they may not be so!
We are dealing with human consciousness. It is entirely possible that humans register even smaller differences, or different categories of distortion, say, when not consciously listening for them - but we shall never know, because conducting the experiment affects the observation.
If you are not conscious of it, it doesn't enter into your enjoyment of what you are listening to. If they are subconscious, you could never describe them as fidelity differences, for example.