
oh dear... "Cable Pathways Between Audio Components Can Affect Perceived Sound Quality"

Standard glass is used to level the competition among professionals.

Competition?? AFAIK people in the trade are doing business. What does scoring 100% mean in your test? That the participants all can't tell the difference in the "wrong" glasses but are 100% correct in the "right" glasses? You don't know GDK and SIY, I gather.
 
Competition?? AFAIK people in the trade are doing business. What does scoring 100% mean in your test? That the participants all can't tell the difference in the "wrong" glasses but are 100% correct in the "right" glasses? You don't know GDK and SIY, I gather.
You misunderstood.

I created a "wine pathway" analogy - a combination of wine and enhancing stemware (cable and transmission standard in the paper in question, or "analogue pathway"). A combination of different-tasting wine and taste-enhancing stemware allowed for a higher rate of correct identification.

I really don't know many people here as I am relatively new.
 
Everyone should understand that unlike ASTM, medical and IEEE standards, perceptual test protocols are purposefully flexible. There are many test models and methodologies.
We accept flexibility. Speaker tests for preference are different from ABX tests of electronics. And codec testing is different again from both.

This test is not that. It starts by saying everything about current tests is invalid because they show the two cables to sound the same. So let's create a new test where we mix in different interfaces, use obsolete and expensive audio hardware no one has quick access to for verifying the results, and run with the conclusion that the cables sound different. Make the work this sloppy and you are going to get criticism, especially since you claim to be the first to find such low thresholds of detection.

Nothing about the outcome of this test is plausible. So it requires and demands more scrutiny. Your notion that the outcome is just dandy, so we should not criticize it, does not at all follow accepted practice in this regard.
 
The experimenter chooses very different wines: a lush, wood-aged California Chardonnay and an ultra-dry French Sancerre. In order to maximize his potential for success, he teaches his subjects to look for distinguishing qualities in each wine: a buttery taste and woody finish in one, and a minerally flavor with a clean finish in the other.

Armed with this quick training and the enhancing qualities of the stemware, his subject group achieves a perfect score in 50 trials.
First, the listeners did not get a perfect score here. They missed some instances. And each subject only took the test 3 times, which means that even if they had gotten it all right, it would not be statistically meaningful. Which the author accepts. His basis for statistical validity was that if we combine all the outcomes, and with them have more trials, then enough right answers were given to provide confidence in the results. Such combination is only allowed in statistics if all the tests are identical in every regard. It is not clear whether this was achieved in the test.

Also, you keep saying this is training. It is not. Training has to be grounded in reality. We trained people for speaker preference by deliberately applying an EQ to the music being played. There is zero doubt that what the EQ does is real, and that it leads to training in what that impairment sounds like. What you just described could be entirely made up, as is the case in this test. Again, suppose I think one ice cream is woody and the other grassy, and I tell you that about two vanilla ice creams. Why on earth would you trust that? On what basis is that real? This is just leading the witness to what you want them to say, not training.
 
You misunderstood.

I created a "wine pathway" analogy - a combination of wine and enhancing stemware (cable and transmission standard in the paper in question, or "analogue pathway"). A combination of different-tasting wine and taste-enhancing stemware allowed for a higher rate of correct identification.

I really don't know many people here as I am relatively new.

I don't think your analogy holds up in real life. Double-blind testing of wine by both experts and amateurs has a...less than stellar history. https://en.wikipedia.org/wiki/Blind_wine_tasting#Professional_tasting_judges

OT: I think the word you are reaching for so often is "acuity", not "aquity". "Aquity" is not a word.
 
I just totaled the statistics.

# of runs = 55
# of correct answers = 40

The minimum number to get right for 95% confidence is 34. So they had 6 more right answers than that, which, if the test was run correctly, is definitely statistically significant (> 99% confidence).

By lay standards, and those of subjectivists, though, there was no clear difference, or they should have gotten almost all of them right.
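For anyone who wants to double-check the tally above, the one-sided binomial test against chance guessing (p = 0.5) can be sketched in a few lines of Python; the trial counts are the totals quoted in the post, everything else is a standard calculation:

```python
from math import comb

def binom_tail(n, k):
    """P(X >= k) for X ~ Binomial(n, 0.5): chance of guessing at least k of n."""
    return sum(comb(n, i) for i in range(k, n + 1)) / 2 ** n

n_trials, n_correct = 55, 40  # totals pooled across all runs, as reported above
p_value = binom_tail(n_trials, n_correct)
print(f"P(>= {n_correct} correct by chance) = {p_value:.2e}")
```

A p-value well below 0.01 corresponds to the "> 99% confidence" figure, assuming the pooled trials really were identical and independent.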
 
We accept flexibility. Speaker tests for preference are different from ABX tests of electronics. And codec testing is different again from both.

This test is not that. It starts by saying everything about current tests is invalid because they show the two cables to sound the same. So let's create a new test where we mix in different interfaces, use obsolete and expensive audio hardware no one has quick access to for verifying the results, and run with the conclusion that the cables sound different. Make the work this sloppy and you are going to get criticism, especially since you claim to be the first to find such low thresholds of detection.

Nothing about the outcome of this test is plausible. So it requires and demands more scrutiny. Your notion that the outcome is just dandy, so we should not criticize it, does not at all follow accepted practice in this regard.
This is simply a question of design space and test objectives.

The common DBT protocols constrain the design space, and within them it has proven very difficult to demonstrate a statistically significant difference.

The author expanded the design space and used extreme examples to demonstrate that it can be done. There is nothing wrong with this as a unique example of capability.

You have learned how to pass AAC/FLAC DBTs. To demonstrate that it can be done.

Good for you!
 
I have no problem with the confidence interval. But the test should be titled 'we prove that listeners can hear a difference between two different inputs and cables on a high-bandwidth amplifier when tested using cables not approved by the amp manufacturer, because they do not conform to the peculiar electrical specification of the DUT'.
 
I don't think your analogy holds up in real life. Double-blind testing of wine by both experts and amateurs has a...less than stellar history. https://en.wikipedia.org/wiki/Blind_wine_tasting#Professional_tasting_judges

OT: I think the word you are reaching for so often is "acuity", not "aquity". "Aquity" is not a word.
Thank you, I need these spelling corrections from time to time. Very embarrassing, really.

The interesting thing about that study's author is that he became a wine maker!
 
The common DBT protocols constrain the design space, and within them it has proven very difficult to demonstrate a statistically significant difference.

The author expanded the design space and used extreme examples to demonstrate that it can be done. There is nothing wrong with this as a unique example of capability.
What? His "protocol" is the same as any other double blind test:

[attached image: standard double-blind test protocol]


Where it differed was that he gave the listeners instructions that were based on sighted listening, i.e. totally unreliable, and forced them to pick one or the other alternative, no matter what they actually perceived. Again, they were asked to say whether the vanilla ice creams being tested were sour or salty. There has never been an argument over said ice cream being one or the other.

The test lacked a control, which would have been extremely useful. He should have first run the test with the Monster cable as both alternatives. If he had shown that to generate random results, and then switched one alternative to the fancy cable, then we would have something. But we don't. We just have a wild outcome that is super hard for us to analyze.

No new ground was taken, only a step backward in how you conduct a proper listening test.
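The missing control is easy to picture with a quick simulation: if both "alternatives" are the same cable, every answer is a coin flip, and a pooled score like 40 out of 55 should almost never appear. A minimal sketch (the 55-trial count is taken from the thread; the run count and seed are illustrative):

```python
import random

random.seed(1)
TRIALS = 55    # pooled trial count reported in the thread
RUNS = 10_000  # simulated repetitions of the whole null experiment

# Under the null (identical cables), each answer is a 50/50 guess.
scores = [sum(random.random() < 0.5 for _ in range(TRIALS)) for _ in range(RUNS)]

frac_high = sum(s >= 40 for s in scores) / RUNS
print(f"mean score = {sum(scores) / RUNS:.1f}, fraction >= 40: {frac_high:.4f}")
```

A properly run control should land near 27 or 28 correct on average; seeing 40 or more is vanishingly rare, which is exactly why a null control run would have been so informative here.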
 
What? His "protocol" is the same as any other double blind test:

[attached image: standard double-blind test protocol]

Where it differed was that he gave the listeners instructions that were based on sighted listening, i.e. totally unreliable, and forced them to pick one or the other alternative, no matter what they actually perceived. Again, they were asked to say whether the vanilla ice creams being tested were sour or salty. There has never been an argument over said ice cream being one or the other.

The test lacked a control, which would have been extremely useful. He should have first run the test with the Monster cable as both alternatives. If he had shown that to generate random results, and then switched one alternative to the fancy cable, then we would have something. But we don't. We just have a wild outcome that is super hard for us to analyze.

No new ground was taken, only a step backward in how you conduct a proper listening test.
I agree, the test would have benefited from an "unadvised" control group.

However, lack of one doesn't invalidate demonstrated statistical significance. If his subjective descriptors were irrelevant, the results would have been null. The fact that they weren't is prima facie evidence that they were helpful - i.e., he educated his subjects to improve their acuity (I got that word right this time - thank you, @jsrtheta !)

And I must insist your vanilla ice cream analogy is incorrect (and I love vanilla!). Incorrect/irrelevant descriptors applied to identical phenomena result in statistically irrelevant results, by definition. They can only increase statistical significance if they align correctly with the effect in the DOE and if A and B aren't the same. And this is education, again by definition.

Just like your sighted education on AAC/FLAC differences informs your current excellent DBT scoring.

Education and knowledge is good. :)
 
As long as we have gone down the path of making wine tasting an analogy to whatever it is we do on audio forums, I will have no choice except to insist that everyone spend a little time watching one of my favorite old episodes of Columbo. The episode is titled, "Any old port in a storm". It is a fun episode, and you can watch it for free as long as you don't mind some interruption for advertising.

https://www.imdb.com/title/tt0069901/?ref_=ttep_ep2
 
As long as we have gone down the path of making wine tasting an analogy to whatever it is we do on audio forums, I will have no choice except to insist that everyone spend a little time watching one of my favorite old episodes of Columbo. The episode is titled, "Any old port in a storm". It is a fun episode, and you can watch it for free as long as you don't mind some interruption for advertising.

https://www.imdb.com/title/tt0069901/?ref_=ttep_ep2
In my defense, it was offered as a "toy" experiment. We use toy experiments to send examples to our software vendors, since we can't share actual models due to security/ITAR regulations.

:) And of course, Columbo is a master!

Though drinking wine while listening is a well understood enhancement to any listening test!

Wait what?
 