Thanks for your reply. Let me clarify about CD pressings: I'm not aware of any evidence that particular pressing plants uniformly and consistently produced CDs whose physical properties caused jitter that was consistently worse - to a degree that would plausibly be audible - than on CDs of the same albums pressed by other plants. For example, the notion that Led Zeppelin CDs pressed by Daio Kosan vs DADC vs WEA/SRC sounded different is a notion that, as far as I'm aware, has no evidence to back it up. Now, I have no doubt that if we found the right combination of discs and CD player models, we might be able to find certain batches of pressings from certain plants that produce higher measurable jitter than other batches from other plants when played back on some CD players. Even then I would be skeptical that such differences would be audible - and I would be skeptical that the jitter would sit in the "Goldilocks" zone of badness where it would impact things like transients or soundstage, yet not be bad enough to produce the digital clicks or pops that would indicate it was tripping up the player's ability to properly read the data.
But even if the jitter were in that Goldilocks zone, that is still a far cry from the claims that are regularly made - and which I was referring to in my prior post - that all of the discs pressed by certain plants, across spans of years at a time, are consistently and audibly discernible from the same albums pressed to CD by other plants. That is a much higher bar, and skepticism about it has nothing to do with closed-mindedness.
As for the idea that the difference between 44.1k and higher sampling rates can be ABX'd, of course it can - we can test folks' ability to hear those differences. But testing for differences is not the same as finding them. And finding them does not tell us what folks were actually hearing if they were able to discern differences to a statistically significant degree. For example, if the original digital source is high-res and the 44.1k comparison file is a resampled and dithered version of that original, you will find slightly different peak-level values (which are often more irregular on the 44.1k version and can even lead to DR values 1dB higher on 44.1k resamples of 96k originals). And the dither will alter the data somewhat as well. So in a scenario where differences were discernible between such files, is the difference the "higher resolution" and fidelity of high-res vs Redbook? Or is the difference due to the fact that the combination of resampling and dithering alters the data in some way that is not 100% audibly transparent? Put another way, if a difference can be discerned between the two, does that automatically mean the high-res version is better? And if the answer - as I think it has to be - is No, then doesn't that throw the entire purpose of the comparison into question? (All of this is leaving aside, for the moment, the question of whether there are even valid studies with truly sound methodologies that have demonstrated a statistically significant audible difference between 44.1k and 96k sources.)
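To make the resampling-and-dither point concrete, here's a minimal sketch - assuming NumPy and SciPy, with synthetic band-limited noise standing in for a real high-res master - showing that a 96k signal resampled to 44.1k and TPDF-dithered to 16 bits generally reports a slightly different peak level than the original, even though nothing "audible" was necessarily changed:

```python
import numpy as np
from scipy.signal import resample_poly

rng = np.random.default_rng(0)
fs_hi = 96000

# One second of band-limited noise as a stand-in for a high-res master,
# smoothed with a short moving-average filter so it's well below Nyquist.
x = rng.standard_normal(fs_hi)
win = np.hanning(64)
x = np.convolve(x, win / win.sum(), mode="same")
x /= np.abs(x).max()  # normalize so the hi-res peak is exactly 1.0 (0 dBFS)

# Resample 96k -> 44.1k (ratio 44100/96000 = 147/320)
y = resample_poly(x, 147, 320)

# Apply TPDF dither at the 16-bit LSB level, then quantize to 16 bits
lsb = 1.0 / 2**15
dither = (rng.random(y.size) - rng.random(y.size)) * lsb
y_q = np.clip(np.round((y + dither) * 2**15), -2**15, 2**15 - 1) / 2**15

peak_hi = 20 * np.log10(np.abs(x).max())
peak_lo = 20 * np.log10(np.abs(y_q).max())
print(f"96k peak:   {peak_hi:+.3f} dBFS")
print(f"44.1k peak: {peak_lo:+.3f} dBFS")
```

The sample values at 44.1k fall at different instants than at 96k, so the maximum sample value - and hence the reported peak and any peak-derived metric like DR - shifts slightly, before any question of audibility even arises.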
My point here is that while openness to new possibilities and further investigation is both good and the core of the entire scientific method, there's a lot of both-sides-ing in these debates, wherein important issues are ignored or discarded in an attempt to make implausible possibilities look plausible, or even like things that have a 50-50 or better chance of being true.