
Interesting article by Mitchco.

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,825
Likes
243,131
Location
Seattle Area
To my ears, I agree with most of the research regarding "phase" coherence: I can't hear a difference. But make no mistake, most of the research is about phase coherence, not time coherence, and that's my point. To my ears, it is when the drivers are time aligned that I hear the difference. Before anyone quotes old scripture, make sure it is about "time" coherence; I would be especially interested in the test conditions, to see whether it is even a valid test scenario.
First, welcome to the forum. :)

On your comment here, I am not sure what you are saying. Any time difference can be described in terms of an amount of phase shift. The two are one and the same.

Here is Clark's experiment on this showing the interchangeable terminology: “Measuring Audible Effects of Time Delays in Listening Rooms,” Clark, David, AES Convention: 74 (October 1983)

[attached image: excerpt from Clark's paper on audible effects of time delays]
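As a quick sanity check of the "time difference can be described as phase shift" claim (my own sketch, not from Clark's paper): delaying a signal by d samples multiplies its spectrum by exp(-j·2πkd/N), i.e. a phase shift that grows linearly with frequency.

```python
import numpy as np

N, d = 64, 5                  # FFT length and delay in samples
x = np.zeros(N)
x[0] = 1.0                    # unit impulse
y = np.roll(x, d)             # the same impulse delayed by d samples

X = np.fft.fft(x)
Y = np.fft.fft(y)

# A pure delay is a linear phase shift: Y[k] = X[k] * exp(-j*2*pi*k*d/N)
k = np.arange(N)
print(np.allclose(Y, X * np.exp(-2j * np.pi * k * d / N)))   # True
```

So "5 samples of delay" and "a phase shift of -2πkd/N at each frequency bin k" are two descriptions of the same operation.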
 

amirm

Delphine Devallez presented this paper to the AES in Paris last month. It investigates how time and phase differences affect soundstage in cars. The dummy "Sandy" made several binaural recordings of pink noise, various time-domain metrics were analyzed, and a hypothesis is presented. I don't view this study as concrete proof that improving IACC, ITD, or other metrics using DSP means listeners will statistically prefer the improved time-domain behavior. But it seems a worthy investigation and a good read. (The lead author is gorgeous too!)
http://www.aes.org/tmpFiles/elib/20160724/18234.pdf
In my quick read of that paper, it is about the phase differential between the two channels of a stereo system. That is a different matter from what it means for the drivers in one speaker, i.e. mono, which is what we are discussing here. Small timing changes can indeed shift sound images in stereo reproduction.
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,181
Location
UK
Any time difference can be described in terms of an amount of phase shift. The two are one and the same.

Here is Clark's experiment on this showing the interchangeable terminology: “Measuring Audible Effects of Time Delays in Listening Rooms,” Clark, David, AES Convention: 74 (October 1983)

View attachment 2380
This is where the engineers begin to take over the asylum, and their view of everything in terms of the frequency domain - including the human ear and brain - colours everything they think and do.

Why didn't the author of that article use a delay line to prove his theories about delay? Phase shift is not the same as delay - for anything but a continuous wave.
 

amirm

This is where the engineers begin to take over the asylum, and their view of everything in terms of the frequency domain - including the human ear and brain - colours everything they think and do.

Why didn't the author of that article use a delay line to prove his theories about delay? Phase shift is not the same as delay - for anything but a continuous wave.
He didn't do that because it would be silly. Let me explain, because this is a technical topic and its message seems to be getting completely lost.

A delay "line" will simply delay when you hear the music through your speaker. If you put in a 0.1 second delay in line with your speaker, then when you hit play, you hear the music 0.1 second later. Nothing else happens. A time delay is just what it says it is.

What we are discussing here is some frequencies taking longer to reach your ear than others. In the case of a speaker with a crossover, that can happen with one driver versus the other. The two drivers are then said to be out of phase, with the impulse response showing that, as demonstrated earlier.

The question then becomes whether we can hear this kind of phase or timing incoherence. If we can, we have a huge and possibly impossible task in front of us: eliminating it from the recording venue, the recording/mixing/mastering chain, and our home listening spaces. Given the size of that task, it makes sense to research its audibility first. On the surface this clearly sounds like something to be fixed, so there has been interest in the audio research community in finding out whether it should be.

The tests use what is called an all-pass filter, as I quoted from Clark's report (Lipshitz and Vanderkooy use the same):

[attached image: all-pass filter test description quoted from Clark's report]


What is an all-pass filter? It is a filter where the amplitude, and hence the frequency response, of what goes in is the same as what comes out. What changes is the phase response: the phase shift varies with frequency. Here is a good picture that shows that:

[image: doctors_Fig1.gif — flat amplitude response (top) and frequency-dependent phase response (bottom) of an all-pass filter]


The top graph shows that the frequency response is NOT changed. The bottom graph shows that the phase IS changed as a function of the frequency of the input signal.
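A small numerical illustration of this property (my own sketch, not from the thread or from Clark): a first-order digital all-pass section H(z) = (a + z⁻¹)/(1 + a·z⁻¹) with real coefficient a has exactly unity gain at every frequency, while its phase varies with frequency.

```python
import numpy as np

a = 0.5  # all-pass coefficient (any real |a| < 1 gives a stable section)
w = np.linspace(0.01, np.pi - 0.01, 512)  # digital frequencies (rad/sample)
z = np.exp(1j * w)

# First-order all-pass: H(z) = (a + z^-1) / (1 + a * z^-1)
H = (a + 1 / z) / (1 + a / z)

mag = np.abs(H)                  # amplitude response
phase = np.unwrap(np.angle(H))   # phase response

print(np.allclose(mag, 1.0))     # True: amplitude is flat at all frequencies
print(phase[0], phase[-1])       # phase sweeps with frequency (toward -pi)
```

The flat magnitude is exactly the "what goes in is what comes out" amplitude behavior described above; all the filter does is rearrange phase versus frequency.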

So we have clearly modified the input signal. Listening tests with controlled signals show that the ear+brain, over headphones or in an anechoic (reflection-free) environment, can indeed pick up this type of "phase distortion." Those differences, however, essentially go away when we listen to music in a room whose reflections cause extreme phase variations of their own.

As to the jab toward engineers, I will address that in the next post.
 

amirm

The first set of research data I posted, in the peer-reviewed journal of the Audio Engineering Society, was authored (in part) by Professors Lipshitz and Vanderkooy. As their titles indicate, they teach at the University of Waterloo. They are luminaries in the audio industry, especially in matters related to signal processing. These are names to know, not names to throw onto some random pile of "engineers."

https://uwaterloo.ca/audio-research-group/people-profiles/john-vanderkooy

John Vanderkooy

BEng, PhD (McMaster) – Distinguished Professor Emeritus
PHY 205 Ext: 32223 Email: [email protected]

Research Topics: Acoustics, Audio and Electroacoustics, Digital Signal Processing as it applies to audio; Diffraction and Radiation of sound by loudspeakers; Electro-acoustic Measurement Techniques.

Present Research Activities: Active acoustic absorbers, noise measurements of wind turbines, the acoustics of the trumpet. There is always an interest in measurement techniques.

Measurement systems of various types normally work satisfactorily when the system under test is truly linear, but nonlinearities can reveal surprising differences. Maximum-length sequence (MLS), swept-tone and pulse systems are under study for the characterization of transducers and acoustic spaces. As well as experiments, computer simulations are used, which allow better control of the nonlinearities.

The diffraction of sound by loudspeaker cabinets has been studied with a view to producing simple predictive methods for the effect of a box. A computer algorithm has been applied to solve the Helmholtz sound equation for various shapes, and theoretical analyses show some asymptotic results that are in general agreement with experiment. Currently the diffraction from cabinet edges is under study. The geometrical theory of diffraction is easily modelled on a computer and is efficient enough that second-order diffraction effects can be included. Low-frequency asymptotic results are being used to correct near-field measurements.

Digital Signal Processing is done in our group with an express emphasis on acoustics and audio. Projects include band splitting for multi-way transducers, active noise cancellation, and measurement systems.

Some significant past papers
U.P. Svensson, R.I. Anderson, J. Vanderkooy. Analytical Time-Domain Model of Edge Diffraction, in Proceedings of NAM98, Nordic Acoustical Meeting, Stockholm, Sweden, September 7-9, 269-272 (1998).

Nonlinearities in Loudspeaker Ports, Presented at 104th AES Convention, Amsterdam, May 16-19, 1998. Preprint 4748, 32 pages.

U.P. Svensson, R.I. Anderson, J. Vanderkooy. A Time-Domain Model of Edge Diffraction Based on the Exact Biot-Tolstoy Solution, in the Technical Report of the Institute of Electronics, Information and Communication Engineers, EA97-39, Tokyo, Japan, September 26, 9-16 (1997).

J. Vanderkooy. Aspects of MLS Measuring Systems. J. Audio Eng. Soc. 42, 219-231. (1994)

S.P. Lipshitz, R.A. Wannamaker and J. Vanderkooy. Quantization and Dither: A Theoretical Survey. J. Audio Eng. Soc. 40, 355-375. (1992)

S.P. Lipshitz, J. Vanderkooy and R.A. Wannamaker. Minimally-audible noise shaping. J. Audio Eng. Soc. 39, 836-852 (1991).


J. Vanderkooy. 1991. A simple theory of cabinet edge diffraction. J. Audio Eng. Soc. 39, 923-933.

J. Vanderkooy and S.P. Lipshitz. 1990. Uses and abuses of the energy-time curve. J. Audio Eng. Soc. 38, 819-836.

----

https://uwaterloo.ca/audio-research-group/people-profiles/stanley-p-lipshitz
Stanley P. Lipshitz

BSc (Natal), MSc (South Africa), PhD (Witwatersrand) – Professor Emeritus
Lab: PHY 205 Ext: 33755 Email: [email protected]

Research Topics : Audio and electro-acoustics; Digital signal processing for audio; Diffraction and radiation of sound by loudspeakers; Electro-acoustic measurement techniques, Dither in image processing.

Present Research Activities: Both theoretical and experimental investigations are being made into the low-frequency radiation and diffraction of sound by loudspeakers. These are based on the use of the Helmholtz boundary integral formulation, and approximations thereto such as the Rubinowicz diffraction integral. It is hoped that this will lead to improved low-frequency measurement procedures in reverberant rooms.

The use of analogue as well as digital dithering techniques in digital audio, and digital signal processing in general, is being investigated both theoretically and experimentally. Dither can eliminate errors due to data quantization. Spectrally shaped dither signals and the design of dithered noise-shaping quantizers are specific current interests.

The application of digital signal processing techniques to audio promises to revolutionize the field in the next decade. We are pursuing investigations in such areas as room deconvolution, adaptive equalization, and surround-sound reproduction.

Some significant past papers
S.P. Lipshitz and J. Vanderkooy. 1983. A family of linear-phase crossover networks of high slope derived by time delay. J. Audio Eng. Soc. 31, 2-20.

J. Vanderkooy and S.P. Lipshitz. 1987. Dither in digital audio. J. Audio Eng. Soc. 35, 966-975.

J. Vanderkooy and S.P. Lipshitz. 1989. Digital dither: Signal processing with resolution far below the least significant bit. Proc. AES 7th International Conference "Audio in Digital Times", Toronto, Canada, May 14-17, 1989.

J. Vanderkooy and S.P. Lipshitz. 1990. Uses and abuses of the energy-time curve. J. Audio Eng. Soc. 38, 819-836.

S.P. Lipshitz, R.A. Wannamaker, J. Vanderkooy and J.N. Wright. 1991. Non-subtractive dither. Proc. 1991 IEEE ASSP Workshop on Applications of Signal Processing to Audio and Acoustics, New Paltz, NY, October 20-23, 1991.

S.P. Lipshitz, J. Vanderkooy and R.A. Wannamaker. 1991. Minimally audible noise shaping. J. Audio Eng. Soc. 39, 836-852.

S.P. Lipshitz, R.A. Wannamaker and J. Vanderkooy. 1992. Quantization and dither: A theoretical survey. J. Audio Eng. Soc. 40, 355-375.

----

Their work on dither was so significant that chances are really good that the products you use to listen to music have the results of their work in them! (TPDF dither)

Now, that doesn't mean you must automatically agree with both of them. You can dispute what they say, but you need to rise to their level of knowledge. The papers are full of mathematics and deep analysis, so this is not a task anyone should take lightly. :)

So let's be done with the "what do engineers know" type of comment. Do a bit of work on who these authors are, what their work has been, and so on. If that is too hard to do, just ask and I will be happy to explain.
 

dallasjustice

Major Contributor
Joined
Feb 28, 2016
Messages
1,270
Likes
907
Location
Dallas, Texas
Amir,
I think it would be helpful to first define what Mitch seeks to investigate. I think what Mitch is saying is that there are audible effects from a more linear time-domain response in a multi-way loudspeaker system, as compared with the typical passive-crossover loudspeaker. In my view, that's just a hypothesis. Nobody has claimed absolute proof.

Wouldn't it be best to investigate this question directly? The study you mention doesn't directly address the phenomenon Mitch is talking about. My hypothesis is that there ARE soundstage effects one should be able to hear when comparing a linear time-domain multi-way speaker against a minimum-phase/passive multi-way speaker, frequency response being equal. Of course, this could be tested, but I have searched the AES library and haven't found any direct tests on this subject. I do think it's a worthy inquiry.


Michael.
 

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,259
Likes
17,254
Location
Riverview FL
What is an all-pass filter? It is a filter where the amplitude, and hence the frequency response, of what goes in is the same as what comes out. What changes is the phase response: the phase shift varies with frequency. Here is a good picture that shows that:

[image: doctors_Fig1.gif — flat amplitude response (top) and frequency-dependent phase response (bottom) of an all-pass filter]


What is a negative phase shift?

Is it putting some lateness into the highs? Or is it delaying the lows?

Or can it do whichever you prefer depending on how you twist the knob?
 

Cosmik

The first set of research data I post in the peer reviewed journal of Audio Engineering Society was authored (in part) by Professors Lipshitz and Vanderkooy. [...]
The phenomenon Mitchco was talking about was the time alignment of drivers in a multiway speaker. This is not just a question of phase shift, but of actual physical delays. Hint: if I swap a speaker's cables over, what phase shift have I achieved? If I "cascade" 100 such swaps, what phase shift have I achieved? And how much delay?

I don't dispute the status of the various people you are citing, but it is sometimes possible for an idiot like me to 'cut to the chase', because this is 2016 and much of this previous work was carried out decades ago. Without wearing my qualifications on my sleeve, I can observe that modern technology renders the arbitrary limitations of passive speakers redundant, and that we can now turn the question around: instead of attempting to prove that linear phase etc. isn't necessary (as people still seem to be demanding), we are in a position to shift the onus onto those who wish to persist with the ancient craft of coil winding.
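The cable-swap hint above can be checked numerically (my own sketch, not from the thread): inverting a signal's polarity shifts its phase by 180° at every frequency yet introduces zero delay, so a phase shift does not, in general, imply any delay.

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.standard_normal(1024)   # arbitrary test signal
y = -x                          # polarity inversion ("swapped cables")

X = np.fft.rfft(x)
Y = np.fft.rfft(y)

# Ratio of spectra is -1 at every bin: a 180-degree phase shift everywhere
ratio = Y / X
print(np.allclose(ratio, -1.0))                 # True

# The phase difference is a constant pi; its slope versus frequency
# (the group delay) is therefore zero -- no time delay at all.
phase_diff = np.angle(ratio)
print(np.allclose(np.abs(phase_diff), np.pi))   # True
```

Cascading 100 such swaps just multiplies by (-1)^100 = 1: still zero delay.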
 

amirm

The phenomenon Mitchco was talking about was the time alignment of drivers in a multiway speaker.
And that is what the research investigates. I have tried to explain it multiple times. If it is still not understood, then chasing this goal is unwise.

Ultimately, some concepts in audio are too complex to grasp if you are not schooled in the field. In that sense, you need to put your trust in people who do understand the topic, rather than keep using lay intuition to trump it. Dr. Toole is one such expert. Here again is what he says in his book:

"In the design of loudspeaker systems, knowing the phase behavior of transducers is critical to the successful merging of acoustical outputs from multiple drivers [i.e. overall frequency response] in the crossover regions. Beyond that, it appears to be unimportant."


So there is no confusion on my part about what the topic is. I know it, have studied the research, and have talked to speaker designers about it. The net of it is above: when it comes to speakers, there are far more important concepts than "time alignment." If you think you are hearing the effect of time incoherence in a room playing music, it is highly unlikely that your assessment is correct.

Either put this concept out of your vocabulary or spend the time to become expert enough to understand the research and the controlled tests performed within it. I suggest the former. There is no third choice of simultaneously not understanding the technical topic yet insisting that the research presented is wrong.
 

Cosmik

And that is what the research investigates. [...]
I can cite other luminaries who take the opposite view, but that doesn't particularly interest me. I enjoy the idea that they could all be wrong and we could think it through, and come up with an interesting take on the subject. Engineers and scientists are prone to groupthink, and at the end of the day none of them can tell you why we enjoy listening to music, nor how our brains process it. Why are we sometimes 'on fire' when listening to music, and sometimes not interested? And so on.

(I do walk the walk by the way: I am an experienced speaker designer and DSP engineer with some publications in the 80s.)
 

hvbias

Addicted to Fun and Learning
Joined
Apr 28, 2016
Messages
577
Likes
424
Location
US
I can cite other luminaries who take the opposite view

I'm taking a guess that they are builders of audiophile speakers- Vandersteen, Jim Thiel, etc? :)

Not meant to be snarky, just that IMHO many of these guys are creating their own pseudo science to sell boxes.
 

amirm

I can cite other luminaries who take the opposite view, but that doesn't particularly interest me.
It doesn't interest me unless their views have this type of evidence behind them, from just before the paragraph I posted:

"Others looked at the audibility of group delay (Bilsen and Kievits, 1989; Deer et al., 1985; Flanagan et al., 2005; Krauss, 1990) and found that the detection threshold is in the range 1.6 to 2 ms, and more in reflective spaces. Lipshitz et al. (1982) conclude, 'All of the effects described can reasonably be classified as subtle. We are not, in our present state of knowledge, advocating that phase linear transducers are a requirement for high-quality sound reproduction.' Greenfield and Hawksford (1990) observe that phase effects in rooms are 'very subtle effects indeed,' and seem mostly to be spatial rather than timbral. As to whether phase corrections are needed, without a phase correct recording process, any listener opinions are of personal preference, not the recognition of 'accurate' reproduction."

Engineers and scientists are prone to groupthink, and at the end of the day none of them can tell you why we enjoy listening to music, nor how our brains process it.

And you think audio hobbyists are not? If I surveyed 100 people at an audio show on whether "timing is important," I bet 99% would say yes, yet none could spell out what it means. I'd rather listen to the people who can. :)

Let's move on please unless you have something concrete to add to the conversation. "I don't believe it" does not count.
 

fas42

Major Contributor
Joined
Mar 21, 2016
Messages
2,818
Likes
191
Location
Australia
Engineers and scientists are prone to groupthink, and at the end of the day none of them can tell you why we enjoy listening to music, nor how our brains process it. Why are we sometimes 'on fire' when listening to music, and sometimes not interested? And so on.
Sorry, but that's exactly what the field of Auditory Scene Analysis (ASA) is doing, and it is making good headway in understanding the process. Why we're "on fire" at times is that we have an internal model ("schema" is the jargon) of how the sound should be, and when the match-up is excellent the pleasure factor goes sky high - and the converse is true ...
 

dallasjustice

http://www.aes.org/e-lib/browse.cfm?elib=17068
Sensitivity of Human Hearing to Changes in Phase Spectrum

This is a Fraunhofer Institute (very serious) study on phase perception using listening tests. It turns out there is still much to be learned about how we humans perceive phase anomalies/distortions, and we are NOT "phase deaf!" :D

If anyone wants this paper, PM me. I just skimmed it. The authors used their research to develop a perceptual model for how we hear phase. It's pretty complicated.


Of course, this paper doesn't prove or disprove what Mitch is saying. But it's no doubt fair game to put forth a hypothesis based on phase/time perception independent from amplitude.
 

Cosmik

People don't have much time for logical deduction, it seems to me! There are certain things that can be worked out without resorting to empirical experiments or calling on data from the 1970s. One of them is that, if you can, you don't 'mess' with a recording, just as you wouldn't mess with a painting. What people are saying here is that if an art gallery could show that people are "insensitive" to scale and hue, it would be acceptable to show a reproduction of a painting that was shrunk a bit to fit an available wall, and maybe tinted slightly in order to better match the works surrounding it. If someone troublesome like me objected, they would show me the experimental data that proved that "only in exceptional circumstances" can people detect a 2% change in overall scale and that overall tint is "almost undetectable" to "the vast majority" of art lovers. Sure, if that was the only way to see the painting I would acquiesce and live with it, but given a choice I would want to see the painting in its original form.

Passive speakers, or Beolab 90/Kii Three? Not time aligned or time aligned? I know which I would choose. It's just logical deduction.
 

mitchco

Addicted to Fun and Learning
Audio Company
Joined
May 24, 2016
Messages
645
Likes
2,417
First, welcome to the forum. :)

On your comment here, I am not sure what you are saying. Any time difference can be described in terms of an amount of phase shift. The two are one and the same.

Here is Clark's experiment on this showing the interchangeable terminology: “Measuring Audible Effects of Time Delays in Listening Rooms,” Clark, David, AES Convention: 74 (October 1983)

Thanks. Consider these two configurations of the same sound reproduction system.

Config A: 3-way passive minimum-phase XO. Each driver's acoustic center is offset along the z-axis. Relative to the tweeter's measured acoustic center, the midrange's acoustic center is 21 samples behind; at a 48 kHz sample rate, 1 sample ≈ 0.3", so 21 samples ≈ 6.3" behind. The woofer's acoustic center is 2 samples, or 0.6", behind the tweeter's. Note that this system is not time aligned, meaning those z-offsets are not corrected, just measured against the linear-phase XO in Config B.

Config B: 3-way digital linear-phase XO. I take the bass and midrange digital XO filters, open them in Acourate, and rotate both signals by the number of samples relative to the tweeter's sample position, from the measurements referenced above. You were once reviewing my eBook; the advanced section has a chapter on Time Alignment with the process and screenshots of the acoustical peaks being time aligned. In the digital domain, I am moving the impulse peaks of the woofer and midrange to line up with the tweeter's impulse peak in the time domain.
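The alignment step described above can be sketched in a few lines (my own illustrative toy, not the Acourate workflow; the 0.3"/sample rule of thumb comes from the ~343 m/s speed of sound at 48 kHz, which works out to ≈0.28" per sample): find each driver's impulse peak and delay the earlier ones so all peaks coincide.

```python
import numpy as np

FS = 48_000   # sample rate (Hz)
C = 343.0     # speed of sound (m/s); 1 sample ~ 7.1 mm ~ 0.28 inch

def peak_index(ir):
    """Sample index of the impulse-response peak (a crude acoustic center)."""
    return int(np.argmax(np.abs(ir)))

def time_align(irs):
    """Delay each impulse response so all peaks land on the latest one."""
    peaks = [peak_index(ir) for ir in irs]
    latest = max(peaks)
    aligned = []
    for ir, p in zip(irs, peaks):
        d = latest - p  # integer-sample delay needed for this driver
        aligned.append(np.concatenate([np.zeros(d), ir])[:len(ir)])
    return aligned

# Toy impulse responses mimicking the offsets quoted above:
tweeter = np.zeros(256); tweeter[50] = 1.0
mid = np.zeros(256);     mid[71] = 0.8    # 21 samples (~6 inches) behind
woofer = np.zeros(256);  woofer[52] = 0.6 # 2 samples (~0.6 inch) behind

aligned = time_align([tweeter, mid, woofer])
print([peak_index(ir) for ir in aligned])  # [71, 71, 71]: peaks coincide
print(21 * C / FS * 39.37)                 # 21-sample offset in inches, ~5.9
```

Real alignment also has to cope with fractional-sample offsets and with picking a sensible "peak" on band-limited drivers, but the integer-sample shift is the core of the idea.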

There are a couple of important differences: one can observe the mathematical differences between a minimum-phase crossover, including delay, and a linear-phase crossover in this document: http://files.computeraudiophile.com/2013/1202/XOWhitePaper.pdf Nothing new here, but it is a good foundation review, and anyone using a digital FIR filter design software kit will obtain the same results.

Note that one can use https://sourceforge.net/projects/rephase/ to adjust the amplitude and phase responses of the filter independently for one's loudspeakers to make them phase coherent, but not time aligned. See this article and Example 1: https://www.minidsp.com/applications/advanced-tools/rephase-fir-tool I encourage folks to try it out and see if they can hear a difference. On my passive-XO speakers, after linearizing the phase, I could not reliably hear a difference, which agrees with most of the science you have quoted. However, that’s not what I am talking about.

It is with both time alignment and phase coherence that I notice a change in imaging. Using a linear-phase crossover provides the phase-coherent part, and aligning the impulse peaks of the drivers is the time-alignment or time-coherence part. The best I have heard it put is this: “Time-alignment is absolute, whereas phase-alignment is frequency dependent. In other words, phase-alignment applies to a specific frequency or band of frequencies, whereas time-alignment is applied across the frequency spectrum.” Do you agree with that? Please don’t quote me something; let’s hear your critical thinking :)

As Michael says, maybe we could create a repeatable experiment to validate this…
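The sample-rotation step in Config B can be sketched in a few lines of pure Python. This is only an illustration, not Acourate's actual implementation: `roll` mimics a cyclic rotation of the filter taps, and the filter length and peak positions are my own hypothetical choices reflecting the offsets measured above:

```python
def roll(seq, shift):
    """Cyclically rotate a list: the element at index i moves to (i + shift) % len(seq)."""
    n = len(seq)
    shift %= n
    return seq[-shift:] + seq[:-shift] if shift else list(seq)

def align_peaks(filters, offsets):
    """Advance each driver's FIR filter by its measured sample offset so that
    every impulse peak lines up with the tweeter's (offset 0)."""
    return {name: roll(taps, -offsets.get(name, 0)) for name, taps in filters.items()}

N = 128
def impulse(peak):
    """A unit impulse standing in for a crossover filter with its peak at `peak`."""
    h = [0.0] * N
    h[peak] = 1.0
    return h

# Hypothetical filters whose peaks reflect the measured z-offsets:
# midrange 21 samples behind the tweeter, woofer 2 samples behind.
filters = {"tweeter": impulse(64), "midrange": impulse(64 + 21), "woofer": impulse(64 + 2)}
offsets = {"midrange": 21, "woofer": 2}

aligned = align_peaks(filters, offsets)
peaks = {name: max(range(N), key=lambda i: abs(h[i])) for name, h in aligned.items()}
print(peaks)  # every impulse peak now lands on sample 64
```

In a real system the rotation would be applied to the measured XO filters rather than to synthetic impulses, but the aligned-peak check at the end is the same idea as comparing the drivers' acoustical peaks in the measurement software.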
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,181
Location
UK
The best I have heard it put is this: “Time-alignment is absolute, whereas phase-alignment is frequency dependent. In other words, phase-alignment applies to a specific frequency or band of frequencies, whereas time-alignment is applied across the frequency spectrum.” Do you agree with that? Please don’t quote me something; let’s hear your critical thinking :)
It is possible to create a speaker system that will give you good coherency between drivers but not reproduce a steady state waveform fed into it without phase distortion.

It is possible to create a speaker system that will reproduce a steady state waveform without phase distortion but not reproduce a transient accurately (direct sound at the listener's position).

Finally, it is possible to create a speaker system that will duplicate the waveform in the recording including transients (direct sound at the listener's position). This involves phase modification for the individual drivers and the correct time delays. This is pretty much what I want from a speaker system - and, indeed, have been listening to for several years. People do seem to find the sound remarkable (in a good way), but in creating such a system, several other aspects are inherently improved too. Are people hearing the absolute time alignment, or just better damping, better inter-channel matching etc.? I don't know, nor am I particularly curious! It is all part of maintaining the recording without trying to analyse it, enhance it, or "see what I can get away with".

Edit:
A frequency-domain-centric view of the world might lead a typical audio scientist or engineer to believe that, to the human hearing system, the second category is identical to the third, and might therefore think that experiments with phase prove that humans are insensitive to phase and timing. As I said before, we really need to compare against truly 'correct' in order to know "what we can get away with". Or, more sensibly, we can just not try to get away with anything!:)
 
Last edited:

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,825
Likes
243,131
Location
Seattle Area
Of course, this paper doesn't prove or disprove what Mitch is saying. But it's no doubt fair game to put forth a hypothesis based on phase/time perception independent from amplitude.
Their data is a deeper dive into what I have been posting. To wit, they used headphones and synthetic signals to arrive at the audibility of phase distortion: "The listening test was performed in a quiet listening room using Sennheiser HD650 headphones." I have shared already that this audibility does exist with headphones, and especially with synthesized signals. So this research confirms what we know, and does not dispute it.

The discussion here is whether that detection exists with loudspeakers in rooms that have so many reflections as to swamp the phase distortion. And such phase distortion also existed in the live venue and the recording/mixing room. The data so far, including implicitly the research you post through its choice of headphones, indicate that the audibility of phase distortion is marginal.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,825
Likes
243,131
Location
Seattle Area
Thanks. Consider these two configurations of the same sound reproduction system.

Config A: 3-way passive minimum-phase XO. The drivers' acoustic centers are offset along the z-axis. Relative to the tweeter's measured acoustic center, the midrange acoustic center is 21 samples behind; at a 48 kHz sample rate, 1 sample ≈ 0.3", so 21 samples ≈ 6.3" behind. The woofer's acoustic center is 2 samples, or about 0.6", behind the tweeter's. Note that this system is not time aligned, meaning those z-offsets are not corrected, just measured from the linear-phase XO in Config B.

Config B: 3-way digital linear-phase XO. I take the bass and midrange digital XO filters, open them up in Acourate, and rotate both signals by the number of samples relative to the tweeter sample position from the measurements referenced above. When you were reviewing my eBook, the advanced section included a chapter on Time Alignment with the process and screenshots of the acoustical peaks being time aligned. In the digital domain, I am moving the impulse peaks of the woofer and midrange to line up with the tweeter impulse peak in the time domain.

There are a couple of important differences: one can observe the mathematical differences between a minimum-phase crossover, including delay, and a linear-phase crossover in this document: http://files.computeraudiophile.com/2013/1202/XOWhitePaper.pdf Nothing new here, but it is a good foundation review, and anyone using a digital FIR filter design software kit will obtain the same results.

Note that one can use https://sourceforge.net/projects/rephase/ to adjust the amplitude and phase responses of the filter independently for one's loudspeakers to make them phase coherent, but not time aligned. See this article and Example 1: https://www.minidsp.com/applications/advanced-tools/rephase-fir-tool I encourage folks to try it out and see if they can hear a difference. On my passive-XO speakers, after linearizing the phase, I could not reliably hear a difference, which agrees with most of the science you have quoted. However, that’s not what I am talking about.

It is with both time alignment and phase coherence that I notice a change in imaging. Using a linear-phase crossover provides the phase-coherent part, and aligning the impulse peaks of the drivers is the time-alignment or time-coherence part. The best I have heard it put is this: “Time-alignment is absolute, whereas phase-alignment is frequency dependent. In other words, phase-alignment applies to a specific frequency or band of frequencies, whereas time-alignment is applied across the frequency spectrum.” Do you agree with that? Please don’t quote me something; let’s hear your critical thinking :)

As Michael says, maybe we could create a repeatable experiment to validate this…
Thanks for the post. I have to run and don't have time to post a proper response. But let me say that I now know what you are arguing, even though you continue to state it in improper technical terms. :) When I get a chance I will state the points and my response to them.

Thanks again.
 

dallasjustice

Major Contributor
Joined
Feb 28, 2016
Messages
1,270
Likes
907
Location
Dallas, Texas
Here's another AES paper more directly applicable to Mitch's hypothesis. Goldmund's "Leonardo" project is a time-aligned multi-way speaker system. Goldmund used a digital IIR filter to invert the group delay introduced by a passive crossover in a two-way speaker. They claim success, and claim listening tests prove it out in terms of spatial detail, etc. It's a bummer that there's no statistical analysis from the listening tests, though.
http://www.aes.org/e-lib/browse.cfm?elib=14096
More from Goldmund:
http://attachments.goldmund.com.s3...._the_leonardo_time_correction_white_paper.pdf
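For intuition on what such a correction targets: the group delay of a minimum-phase crossover varies with frequency, and that frequency-dependent delay is what a time-correction filter tries to flatten. A minimal sketch, using a 2nd-order Butterworth low-pass as a stand-in for the passive crossover (the 1 kHz corner frequency and numerical step size are arbitrary choices of mine, not from the Goldmund paper):

```python
import cmath
import math

def butter2_lp_phase(f, fc):
    """Phase (radians) of a 2nd-order Butterworth low-pass at frequency f."""
    s = 1j * (f / fc)                               # normalized analog frequency
    h = 1.0 / (s * s + math.sqrt(2.0) * s + 1.0)    # Butterworth transfer function
    return cmath.phase(h)

def group_delay_ms(f, fc, df=1.0):
    """Group delay -dphi/domega, estimated by a central difference, in milliseconds."""
    dphi = butter2_lp_phase(f + df, fc) - butter2_lp_phase(f - df, fc)
    return -dphi / (2.0 * math.pi * 2.0 * df) * 1000.0

fc = 1000.0  # hypothetical crossover corner, Hz
for f in (100.0, 1000.0, 5000.0):
    print(f"{f:6.0f} Hz: {group_delay_ms(f, fc):.3f} ms")
# delay is ~0.23 ms near and below the corner but falls toward zero well above it
```

Because the delay is not constant across frequency, no single bulk delay can undo it; the correction filter must supply a complementary, equally frequency-dependent delay, which is the essence of the Leonardo approach described above.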
 
Last edited: