
Are you Euphonophile?

Sal1950

Grand Contributor
The Chicago Crusher
Forum Donor
Joined
Mar 1, 2016
Messages
14,208
Likes
16,955
Location
Central Fl
It is supposed to "sound good" to the listener... if it doesn't, then no amount of discourse about accuracy and fidelity to the source will sway the listener. The truth is that there is room for both accuracy and good sound. Actually, accuracy sounds good. We may then need to define what accuracy really is... and it is not a flat FR at the listener's ears in a room. A flat FR sounds strident for most recordings. A tilted FR has been shown by serious researchers to be preferable. How to achieve it has been the issue, and there are various roads to this result.
We do have to keep that in perspective though. Accurate recordings for the most part sound good on an accurate system, but a lot of recordings, for a variety of reasons, sound bright and edgy. Also, now we've focused on one very narrow aspect of reproduction, top-end FR. Tame it a bit and the results sound smoother; tame it more and it's just plain rolled off. :(
Beyond that, there's accurate or distorted. The high end today chooses to voice many components in some pleasing manner and call it "better" or "more real" when it's really just distorted. We mostly all take that path in one form or another, if only in our speaker choices. When I turn on the 5.2.4 upmixing on my rig I know it's miles from accurate, and that's OK for me, with the plus that I can turn it all off. Let's just not stray from the path so far that we no longer respect or recognize transparency, since it's only down that path that we can improve the SOTA.
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,408
We do have to keep that in perspective though. Accurate recordings for the most part sound good on an accurate system, but a lot of recordings, for a variety of reasons, sound bright and edgy. Also, now we've focused on one very narrow aspect of reproduction, top-end FR. Tame it a bit and the results sound smoother; tame it more and it's just plain rolled off. :(
Beyond that, there's accurate or distorted. The high end today chooses to voice many components in some pleasing manner and call it "better" or "more real" when it's really just distorted. We mostly all take that path in one form or another, if only in our speaker choices. When I turn on the 5.2.4 upmixing on my rig I know it's miles from accurate, and that's OK for me, with the plus that I can turn it all off. Let's just not stray from the path so far that we no longer respect or recognize transparency, since it's only down that path that we can improve the SOTA.

Great post. I should qualify everything I've said in favour of "euphonic" distortion with the sidenote that it's always also my aim to have a system that is capable of being as transparent as possible when I desire that of it.
 

TBone

Major Contributor
Joined
Mar 16, 2016
Messages
1,191
Likes
348
We do have to keep that in perspective though. Accurate recordings for the most part sound good on an accurate system, but a lot of recordings, for a variety of reasons, sound bright and edgy. Also, now we've focused on one very narrow aspect of reproduction, top-end FR. Tame it a bit and the results sound smoother; tame it more and it's just plain rolled off. :(

Funny how this very scenario has played out in my system over the years. With early CD players (and some players still in rotation today), bright and edgy was a relatively common complaint with specific sources, especially early CD masters containing very high dynamic range. Now, decades later, many of my best-sounding CDs are those same high-DR CDs that I once thought bright/edgy.

What's bright & edgy to my ears, currently, are many of the compressed remasters, which not only sound "wrong" to me in terms of reproducing certain musical instruments (such as a hard-hit cymbal), but also at higher volumes: I just can't play 'em louder without cringing.

I have 5 digital players and still prefer just 2, and even at that, it depends on what CDs I'm playing. I've had two recent DACs in my system, both relatively cheap models (<$500), and both, surprisingly, fell into the cringing camp. I thought my old Rega CDP so colored as to make everything sound a tad more reverberant.

For better or worse, I certainly don't fall into the "all digital sounds the same to me" camp.

Trying to understand such preferences, months ago I did a little experiment in which I recorded each CD player/DAC (then available) directly into my recorder and applied the same kind of measurements as when comparing LP vs CD (but dug much deeper in samples/frequency, considering how much smaller the differences are). Below is one example. The results did change - somewhat more on one player than all the others - based on which CD was played (I only tried ~10 CDs, some originals, 1 with pre-emphasis, and recent remasters, so a very small sample size).

At lower frequencies the differences are much smaller, but they do become more prominent as frequency rises ...

The white broken-line trace is the actual CD. The blue is a relatively cheap DVD player, the green a relatively expensive CDP. They measure quite close to the actual CD signal, and funnily enough, I didn't find either of these players to be subjectively "bright or edgy".

[Attachment 1535563743195.png: narrow-band spectrum around 10 kHz for the blue and green players vs. the actual CD signal]


Below are a circa-1992 CDP (brown) and a USB DAC (red), both of which I do find subjectively "bright & edgy".

[Attachment 1535564095201.png: the same comparison for the circa-1992 CDP (brown) and the USB DAC (red)]
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,408
@TBone could you explain exactly what you're plotting on those graphs? Vertical axis is dB I presume? What's the horizontal?

And PS which two USB DACs are represented? Thx :)
 

TBone

Major Contributor
Joined
Mar 16, 2016
Messages
1,191
Likes
348
The horizontal axis is frequency: the specific measurement is at 10 kHz, plus/minus 10 Hz.
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,408
Still not sure I get it, sorry...

Is this a correct notation of the horizontal axis then?

[Attachment 1535564095201.png: the second plot, re-posted with a proposed labelling of the horizontal (frequency) axis]
 

TBone

Major Contributor
Joined
Mar 16, 2016
Messages
1,191
Likes
348
yes, sorry, I should have included such data initially.
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,408
yes, sorry, I should have included such data initially.

Interesting, thanks.

It seems that in the latter two cases (the second graph) we have the DAC outputting each sample at a slightly higher frequency than on the recording (perhaps 1Hz higher @ 10KHz), with the orange DAC getting the amplitude about right and the red DAC outputting about 0.3dB lower on average. Am I understanding the graph correctly?

What explains what's going on in your opinion?
 

TBone

Major Contributor
Joined
Mar 16, 2016
Messages
1,191
Likes
348
Interesting, thanks.

It seems that in the latter two cases (the second graph) we have the DAC outputting each sample at a slightly higher frequency than on the recording (perhaps 1Hz higher @ 10KHz), with the orange DAC getting the amplitude about right and the red DAC outputting about 0.3dB lower on average. Am I understanding the graph correctly?

You are. As frequency rises, the frequency error also grows, pushing the reproduced tone towards an even higher frequency, although above 10 kHz it is arguably less audible.

What explains what's going on in your opinion?

Don't know, nor do I have an opinion on such matters in which I have very limited knowledge. I do think, however, the one player that consistently measured most differently (relatively expensive CDP, green trace) depending on which CD was played, includes proprietary noise-shaping techniques ...
 

Sal1950

Grand Contributor
The Chicago Crusher
Forum Donor
Joined
Mar 1, 2016
Messages
14,208
Likes
16,955
Location
Central Fl
I have 5 digital players and still prefer just 2, and even at that, it depends on what CDs I'm playing. I've had two recent DACs in my system, both relatively cheap models (<$500), and both, surprisingly, fell into the cringing camp. I thought my old Rega CDP so colored as to make everything sound a tad more reverberant.
So then, are you confident you could identify these differences in a bias-controlled blind test, or could they just as easily be bias-induced?
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,408
Don't know, nor do I have an opinion on such matters in which I have very limited knowledge. I do think, however, the one player that consistently measured most differently (relatively expensive CDP, green trace) depending on which CD was played, includes proprietary noise-shaping techniques ...

Most differently or most similarly? It seems that the orange and red traces measure most differently from the original.
 

Blumlein 88

Grand Contributor
Forum Donor
Joined
Feb 23, 2016
Messages
20,784
Likes
37,678
At 10 kHz, a 1 Hz higher frequency is 100 ppm. This level of speed difference would not be at all uncommon. The ADC may or may not be slow itself; a slow ADC would make you think the frequency was higher.

So what you are showing, then, is plots from a 64k FFT over just those 20 Hz around 10 kHz. Is that for the entire track, or a few seconds?

In any case, other than a slight speed difference, and maybe a slight droop in high-frequency response, I'm not sure it is telling you anything useful about why one sounds bright and edgy or not. The speed differences will shift the same signal inside each bin slightly and can slightly alter the level of the following FFT bin. By this I mean: suppose the peak signal in the 10 kHz bin is dead on 10 kHz, yet in another version the peak signal is at 10,000.5 Hz. Windowing allows some bleed-over into the next bin, so it might very slightly lower the level in the 10,000 Hz bin and very slightly raise the level in the 10,001 Hz bin of the FFT. (Yes, some simplification, as I know the bins aren't actually 1 Hz wide here.)
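
If it helps to see that bleed-over concretely, here is a minimal toy sketch (my own illustration, not something posted earlier in the thread): the same Hann-windowed 64k FFT applied to a tone dead on 10,000.0 Hz and to one running 0.5 Hz fast, at a 44.1 kHz sample rate. The half-hertz offset slightly redistributes the levels read off the bins nearest 10 kHz.

Code:
import numpy as np

rate, n_fft = 44100, 65536
t = np.arange(n_fft) / rate
window = np.hanning(n_fft)
freqs = np.fft.rfftfreq(n_fft, d=1.0 / rate)
centre = int(np.argmin(np.abs(freqs - 10_000.0)))   # bin nearest 10 kHz

for tone in (10_000.0, 10_000.5):
    spectrum = np.abs(np.fft.rfft(np.sin(2 * np.pi * tone * t) * window))
    print(f"tone at {tone} Hz (bin spacing ~{rate / n_fft:.2f} Hz):")
    for k in range(centre - 2, centre + 3):          # the five bins around 10 kHz
        level_db = 20 * np.log10(spectrum[k] / spectrum.max())
        print(f"  bin {freqs[k]:10.3f} Hz   {level_db:6.1f} dB re. peak")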
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
Where does the 44.1 kHz come from? From the following crystal frequencies.

The first in the list is the most interesting.

4.43361875 MHz
PAL B/D/G/H/I and NTSC M4.43 color subcarrier. Also used in Compact Disc players and recorders where the crystal frequency is slightly pulled to 4.41 MHz and then divided by 100 to give the 44.1 kHz sampling frequency.

5.6448 MHz
Used in CD-DA systems and CD-ROM drives; allows binary division to 44.1 kHz (128×44.1 kHz), 22.05 kHz, and 11.025 kHz. Frequencies also used (multiples of 5.6448 MHz) are 11.2896 MHz, 16.9344 MHz, 22.5792 MHz, 33.8688 MHz and 45.1584 MHz.

8.4672 MHz
Used in CD-DA systems and CD-ROM drives; allows integer division to 44.1 kHz (192×44.1 kHz), 22.05 kHz, and 11.025 kHz. Also allows integer division to common UART baud rates up to 115200. Frequencies also used are 11.2896 MHz, 16.9344 MHz, 22.5792 MHz, 33.8688 MHz and 45.1584 MHz.

14.112 MHz
Digital audio systems - 294×48 kHz, 320×44.1 kHz. Also allows integer division to common UART baud rates up to 19200. Available as TCXO.

17.2032 MHz
PLL conversion by 10/7 to 24.576 MHz and by 21/16 to 22.5792 MHz, which are 512× the audio sampling frequencies 48 kHz and 44.1 kHz, respectively.

Standard crystals can start with ±20 or 30 ppm absolute tolerance at, say, 25 °C, with ±50 ppm over the operating temperature range, plus additional deviations for ageing, etc.

I don't think you'd hear such a tiny shift in pitch.
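
For a sense of scale, here is my own back-of-envelope arithmetic, assuming a worst-case total clock error of about 100 ppm (roughly what the tolerances above could add up to; the figure is an assumption, not a datasheet value):

Code:
import math

ppm_error = 100                       # assumed worst-case total clock error
ratio = 1 + ppm_error / 1e6           # resulting frequency (pitch) ratio
cents = 1200 * math.log2(ratio)       # pitch shift in cents (100 cents = 1 semitone)
print(f"pitch shift: {cents:.3f} cents")   # ~0.17 cents, far below a pitch JND of a few cents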

Edit: As an aside, if you are armed with a schematic and data sheets for the components, you can judge how a circuit will perform without having to measure it. But if all you have is a black box and are using 'science' to characterise it, you cannot possibly cover all the permutations. Your experiment may show that the sample frequency is very accurate, but it cannot show that another box off the production line is much worse, or that some boxes vary with temperature more than others.

This is just a general moan about the way 'science' is viewed as the answer to the audiophile problem, when really it is a lot simpler to analyse things at source. Given some speakers, you could measure their frequency response in a room and attempt to derive some measure of 'performance' or 'quality'. But if, instead, you created a model of the speaker based on its drivers and dimensions, you would be able to predict what it would do in any room. Or, by doing the Spin-o-rama measurement you would be able to do the same thing.

But audiophile science isn't even this clever. It goes one step further into the exponentially expanding number of permutations that are generated as you move away from knowledge of the design, and asks humans to listen to a setup playing certain extracts of music at certain volume levels in a certain room at a certain temperature at a certain distance from the speaker, etc. etc. in reality only characterising a minuscule pocket of the total possible 'experimental space'.
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,408
I don't think you'd hear such a tiny shift in pitch.

Totally agree. Perhaps this error is indicative of other, actually audible issues (but ofc perhaps not :))

Edit: As an aside, if you are armed with a schematic and data sheets for the components, you can judge how a circuit will perform without having to measure it. But if all you have is a black box and are using 'science' to characterise it, you cannot possibly cover all the permutations. Your experiment may show that the sample frequency is very accurate, but it cannot show that another box off the production line is much worse, or that some boxes vary with temperature more than others.

But how were the specifications on the datasheet generated in the first place, if not by scientific means (measurement)?

But audiophile science isn't even this clever. It goes one step further into the exponentially expanding number of permutations that are generated as you move away from knowledge of the design, and asks humans to listen to a setup playing certain extracts of music at certain volume levels in a certain room at a certain temperature at a certain distance from the speaker, etc. etc. in reality only characterising a minuscule pocket of the total possible 'experimental space'.

Yes, but good listening studies avoid these issues by controlling all variables other than that being tested.

Of course, you may object to the results of a single study by saying (for example): "X% 2nd harmonic distortion was inaudible on those music samples at 20°C, but what about at 25°C?" If you really believe the change in temperature would make a difference to the results, then repeat the experiment at a different temperature.

Remove listening studies from the equation and you're in the dark.

I recall you saying, for example, that many DACs and amplifiers are transparent, even those that do not perform absolutely exceptionally when objectively measured. I agree, but how could we know that without relying on the outcomes of listening studies?
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
But how were the specifications on the datasheet generated in the first place, if not by scientific means (measurement)?
The point is not that science wasn't used to create the data sheet, but that someone has already done it for you, controlling all the relevant variables. They have made objective measurements (or more directly controlled the manufacturing process to achieve specifications) and with this information you can be certain that your CD player will have a sample frequency that lies between A and B. There is no point in listening to it in order to establish that the sample frequency is acceptable when the tolerances are below the known human ability to resolve pitch differences or absolute pitch. And that experiment can be performed separately with a variable clock generator rather than trying out different CD players.
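
To make the "between A and B" point concrete, a quick sketch with an assumed ±100 ppm total tolerance budget (a hypothetical figure, not taken from any real datasheet):

Code:
nominal_fs = 44_100.0                 # Hz, nominal CD sample rate
tol_ppm = 100                         # assumed worst-case tolerance budget

fs_low = nominal_fs * (1 - tol_ppm / 1e6)
fs_high = nominal_fs * (1 + tol_ppm / 1e6)
print(f"sample rate guaranteed between {fs_low:.1f} Hz and {fs_high:.1f} Hz")
# -> roughly 44095.6 Hz to 44104.4 Hz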

It's all about controlling the variables that you can in order to reduce the 'experimental space' that you have to cover with your listening tests (if you insist on using them :)).

Yes, but good listening studies avoid these issues by controlling all variables other than that being tested.
I detect a misunderstanding...

Suppose there's a listening test to determine whether humans can detect phase differences. If I play music at some subjects using a conventional system (which has in-built arbitrary phase shifts and timing errors, but I don't specify that, instead stating it is 'typical') and then I change some phase aspects of the signal and repeat, at no time will I have played an error-free version to the subject. If the subjects don't hear any difference and my conclusion is that "Humans cannot detect phase because phase was the only variable I changed", then I think this conclusion is unwarranted. Ditto if I word my conclusion very precisely, but allow others to infer the wider conclusion without correcting them.

Similarly if I play speakers at the subjects, changing only the speaker type, but keeping the location and orientation identical for all speakers. It would appear that the only variable that has changed is the speaker type. But of course some speakers are designed to be near walls, and others away from them. Ditto their toe-ing in and -out.

As soon as you enter the 'scientific' arena, you are in a minefield of astronomical numbers of permutations, or you test a tiny, tiny pocket of the 'experimental space' by fixing many of the variables. If you do the latter, it is a bit underhand to allow people to read wider conclusions into your experimental results - which they naturally do, and then they say that 'science' has demonstrated something universal.

If you can, instead, work everything out from theory, data sheets and models, you stand a chance of understanding what is going on.
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,408
Yes, I think we do have a misunderstanding :) You're giving examples of pseudo-scientific studies, ones that do not properly control variables.

For phase audibility to be properly studied, for example, the baseline system must be phase correct. Otherwise, it's just bad science.

There is no point in listening to it in order to establish that the sample frequency is acceptable when the tolerances are below the known human ability to resolve pitch differences or absolute pitch. And that experiment can be performed separately with a variable clock generator rather than trying out different CD players.

My bold. This human ability to resolve these differences is known only as a result of scientific listening studies. The specifications are meaningless without the context provided by these studies.
 

TBone

Major Contributor
Joined
Mar 16, 2016
Messages
1,191
Likes
348
At 10 kHz, a 1 Hz higher frequency is 100 ppm. This level of speed difference would not be at all uncommon. The ADC may or may not be slow itself; a slow ADC would make you think the frequency was higher.

So what you are showing, then, is plots from a 64k FFT over just those 20 Hz around 10 kHz. Is that for the entire track, or a few seconds?

For these recordings, no preamp was used (I had used a preamp for similar testing of digital sources previously); each digital source was hooked directly to the recorder using the very same cable. The recorder's Sony SBM (noise shaping) feature was defeated. Each source was allowed ~10 minutes of warmup. I recorded ~10 CD cuts (entire songs). Audacity's Plot Spectrum analyzed the first 237 seconds of each song.
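
In case anyone wants to try the same kind of comparison, below is a rough sketch of the workflow as described above (my approximation only, not the exact procedure; the file names, the 64k FFT size and the 10 kHz ± 10 Hz window are assumptions based on this discussion):

Code:
import numpy as np
from scipy.io import wavfile

def narrowband_spectrum(path, seconds=237, n_fft=65536, f_lo=9990.0, f_hi=10010.0):
    """Average 64k-point spectrum of the first `seconds` of a 44.1 kHz WAV,
    returned only for the band f_lo..f_hi (levels in dB, arbitrary reference)."""
    rate, data = wavfile.read(path)
    if data.ndim > 1:                                  # mix stereo to mono
        data = data.mean(axis=1)
    data = data[:int(seconds * rate)].astype(np.float64)
    window = np.hanning(n_fft)
    n_blocks = len(data) // n_fft
    acc = np.zeros(n_fft // 2 + 1)
    for i in range(n_blocks):                          # average over consecutive blocks
        acc += np.abs(np.fft.rfft(data[i * n_fft:(i + 1) * n_fft] * window))
    freqs = np.fft.rfftfreq(n_fft, d=1.0 / rate)
    sel = (freqs >= f_lo) & (freqs <= f_hi)
    return freqs[sel], 20 * np.log10(acc[sel] / n_blocks + 1e-12)

# "original_cd.wav" and "player_capture.wav" are hypothetical file names for the
# CD rip and the recorder capture of one player.
f_ref, db_ref = narrowband_spectrum("original_cd.wav")
f_cap, db_cap = narrowband_spectrum("player_capture.wav")
for f, a, b in zip(f_ref, db_ref, db_cap):
    print(f"{f:9.2f} Hz   CD {a:7.2f} dB   capture {b:7.2f} dB")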

This was done some time ago so ... I may have saved all the other cuts on a hard drive somewhere (or even on my S3), but for now, all I could find was the above Pretenders cut/data. Which is too bad, since one song (can't remember which, but it wasn't the pre-emphasis CD) really displayed some major differences with one player in particular.

The speed errors in the 1992 CDP (brown) and USB DAC (red) remained consistent no matter which CD cut was played, and they always showed a greater speed shift with increasing frequency. The other players (blue, green traces) were consistent in having no discernible speed shift with any cuts.

It should also be noted that I had long thought the 1992 CDP bright & edgy, and even this particular USB DAC was considered as such prior to this test. So, perhaps you think you can't hear a very slight shift in pitch across the entire upper frequency range (starting at ~5 kHz, IIRC), but ...
 

Cosmik

Major Contributor
Joined
Apr 24, 2016
Messages
3,075
Likes
2,180
Location
UK
Yes, I think we do have a misunderstanding :) You're giving examples of pseudo-scientific studies, ones that do not properly control variables.
Sounds like the No True Scotsman argument :). No true scientist would ever publish anything that was pseudoscientific. Nevertheless, such pseudoscience does get published, and the fact that it is published is taken to imply that it is science, and people quote it as though it is science, and whether it is science or pseudoscience is a subjective judgement.

And what is this "control of variables"? If something is constant but complex (e.g. the built-in phase errors in passive crossover-based speakers), is it then a controlled variable, a fixed one, or not a variable at all? If it is not completely understood (stuff goes on in speakers that is very complex, especially if you're stretching the drivers' capabilities), does that invalidate the experiment?

For phase audibility to be properly studied, for example, the baseline system must be phase correct. Otherwise, it's just bad science.
No system will ever be truly phase correct - and distortion-free, compression-free, noise-free.

But from what you were saying in previous threads, as long as the scientist reports what he did and doesn't draw any unwarranted conclusions, it is science. Good or bad is a subjective value judgement.
 

andreasmaaan

Master Contributor
Forum Donor
Joined
Jun 19, 2018
Messages
6,652
Likes
9,408
No system will ever be truly phase correct - and distortion-free, compression-free, noise-free.

You say that a system cannot be phase-correct, distortion-free, etc. But empirical evidence is required to determine whether (and the extent to which) a system is phase-incorrect, distorted, etc.

To obtain this empirical evidence requires instruments, which themselves need to be calibrated by reference to other instruments.

We're now faced with a circularity problem. Should we give up on measurements completely too? Why is evidence obtained via listening more flawed than evidence obtained via instrumental measurement, given neither can be established to be reliable in absolute terms?

But from what you were saying in previous threads, as long as the scientist reports what he did and doesn't draw any unwarranted conclusions, it is science. Good or bad is a subjective value judgement.

This is a misunderstanding I think. Good or bad is not a purely subjective judgement. If a scientific experiment cannot be repeated or replicated, it is bad. If it does not adequately control for variables, it is bad. If it puts forward a hypothesis that is by nature impossible to falsify, it is bad. There are other reasons too of course, and there are degrees of good and bad, as in everything.

I acknowledge that science can never reach the objective truth. That's not the point. It is flawed, yes, but once you start putting any kind of empiricism into the topic of interest (which necessarily includes human perception in the case of audio), it's the best we have.

Let me try a different tack - how do you propose to back up your previous statement:
"There is no point in listening to it in order to establish that the sample frequency is acceptable when the tolerances are below the known human ability to resolve pitch differences or absolute pitch."

...without reference to scientific studies involving listening?

*Sorry, edited that a bunch of times, kept noticing mistakes ;)
 

Fitzcaraldo215

Major Contributor
Joined
Mar 4, 2016
Messages
1,440
Likes
634
The point is not that science wasn't used to create the data sheet, but that someone has already done it for you, controlling all the relevant variables. They have made objective measurements (or more directly controlled the manufacturing process to achieve specifications) and with this information you can be certain that your CD player will have a sample frequency that lies between A and B. There is no point in listening to it in order to establish that the sample frequency is acceptable when the tolerances are below the known human ability to resolve pitch differences or absolute pitch. And that experiment can be performed separately with a variable clock generator rather than trying out different CD players.

It's all about controlling the variables that you can in order to reduce the 'experimental space' that you have to cover with your listening tests (if you insist on using them :)).


I detect a misunderstanding...

Suppose there's a listening test to determine whether humans can detect phase differences. If I play music at some subjects using a conventional system (which has in-built arbitrary phase shifts and timing errors, but I don't specify that, instead stating it is 'typical') and then I change some phase aspects of the signal and repeat, at no time will I have played an error-free version to the subject. If the subjects don't hear any difference and my conclusion is that "Humans cannot detect phase because phase was the only variable I changed", then I think this conclusion is unwarranted. Ditto if I word my conclusion very precisely, but allow others to infer the wider conclusion without correcting them.

Similarly if I play speakers at the subjects, changing only the speaker type, but keeping the location and orientation identical for all speakers. It would appear that the only variable that has changed is the speaker type. But of course some speakers are designed to be near walls, and others away from them. Ditto their toe-ing in and -out.

As soon as you enter the 'scientific' arena, you are in a minefield of astronomical numbers of permutations, or you test a tiny, tiny pocket of the 'experimental space' by fixing many of the variables. If you do the latter, it is a bit underhand to allow people to read wider conclusions into your experimental results - which they naturally do, and then they say that 'science' has demonstrated something universal.

If you can, instead, work everything out from theory, data sheets and models, you stand a chance of understanding what is going on.
I realize you do not like the concept of statistical inference from samples to larger populations at all. Perhaps you missed or fell asleep in some math or statistics courses.

So, let's design the perfect drug for a disease from specs based on known science and test-tube lab experiments, then we can skip all those noisy, messy experiments on samples of animals and clinical trials on samples of humans. Just go right to market with it. Testing via sampling is useless anyway, since it does not consider all possible permutations of actual use, concurrent or prior conditions, etc. Our scientific understanding most assuredly has perfect foresight by merely designing the drug with our perfect knowledge of every single one of its potential effects under any circumstances. We already know everything. The science is already perfect. No need to mess with pesky experiments and trials on samples of unruly living creatures who will take the drug.

I am not saying audio is nearly as complex or as dangerous. It is much simpler, but still, the sheer number of possible variables, blind spots or distortions in our understanding and design, hidden gotchas, etc. is still quite formidable, even if we were to assume the technical, scientific understanding is perfect, which I don't believe for a moment. The day any field of science believes it knows everything there is to know is the day that science in that field dies.

Your argument seems to revolve around the idea that testing of samples is invalid since the samples are only limited subsets of the much larger total population set of possible outcomes. Meanwhile, you imply, theoretical, non-empirical science is more perfect in its predictive abilities by virtue of its much more complete understanding of the totality of all potential processes involved and cause/effect. Hence, science need not bother to verify via testing with imperfect, mere subset samples on potential users and consumers. The best, brightest and smartest humans can absorb and interpret absolutely every speck of the science perfectly and sufficiently, and from that completely and comprehensively specify a new solution requiring absolutely no testing or verification.

Ergo, "solved in theory, therefore solved" is all we need as a mantra.

Hogwash.
 