Many high-end audiophiles listen to analog systems that have far more distortion and noise than digital. So yes, our brain has many blind spots when it comes to distortion. Perceptual and temporal masking are quite powerful, as is the highly non-linear sensitivity of the ear to low-level signals. A distortion that is audible at 3 kHz would have to be some 50 dB higher in level to be audible at 30 Hz!
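As a rough illustration of that frequency dependence, here is a sketch using Terhardt's well-known approximation of the threshold of hearing in quiet for pure tones. The formula and the exact numbers are my illustration, not from the post above; real audibility of distortion in music also depends on masking, which this simple threshold curve ignores.

```python
import math

def threshold_in_quiet_db_spl(f_hz: float) -> float:
    """Terhardt's approximation of the absolute threshold of hearing
    in quiet (dB SPL) for a pure tone at frequency f_hz.
    Illustrative only: ignores masking and individual variation."""
    f = f_hz / 1000.0  # frequency in kHz
    return (3.64 * f ** -0.8
            - 6.5 * math.exp(-0.6 * (f - 3.3) ** 2)
            + 1e-3 * f ** 4)

# Compare the ear's most sensitive region with deep bass
t_3k = threshold_in_quiet_db_spl(3000)
t_30 = threshold_in_quiet_db_spl(30)
print(f"3 kHz threshold: {t_3k:6.1f} dB SPL")
print(f"30 Hz threshold: {t_30:6.1f} dB SPL")
print(f"difference:      {t_30 - t_3k:6.1f} dB")
```

With this approximation the 30 Hz threshold comes out on the order of 60 dB above the 3 kHz threshold, in the same ballpark as the ~50 dB figure quoted above (equal-loudness contours such as ISO 226 tell a similar story at low listening levels).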
Let's remember that our hearing, from an evolutionary point of view, was designed to hear other humans and prey. It was not meant to be a high-resolution electronic recorder. It can do remarkably well at times (e.g. in the mid frequencies) but be as deaf as a post at others. With music being a busy spectrum, many things need to line up for distortion to be audible.
So these tests are inconclusive: you cannot differentiate whether the subjects simply cannot hear the distortion because of the limits of our hearing mechanisms, or whether they cannot hear it because the brain is effectively filtering it out.
Isn't it possible that because people mostly listen to audio equipment with relatively high levels of noise, they have trained their brains to filter out distortion, and that this would explain the results of these tests? Wouldn't it then also be possible to train your brain by listening to audio equipment with relatively low levels of noise and distortion, so that when you take these tests you could readily identify the relatively high levels of noise and distortion?
In other words, the subjects, including the so-called audiophiles, have already been preconditioned for these tests by their experience with common (and not-so-common) audio equipment with relatively high levels of distortion, which would render the tests inconclusive.