I think the problem here is relying only on instruments to measure harmonic and sympathetic resonance, which is more difficult than using one's trained listening ears to hear how the higher frequencies of music filter down into the lower 'human hearing range.'
However, to be polite, it may be best to call this a 'theory' rather than 'empirical fact', as empirical science is limited by the tools it has to hand, as much as by the closed minds that cannot contemplate anything outside their usual accepted paradigm.
The frequency range of a microphone is defined as the interval between its lower and upper limiting frequencies. Today's microphones can cover a frequency range starting from around 1 Hz and reaching up to 140 kHz.
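To make that bandwidth figure concrete: by the Nyquist criterion, capturing content up to a given frequency requires a sample rate of more than twice that frequency. A minimal sketch (the 140 kHz figure comes from the microphone range above; the helper name is my own):

```python
def min_sample_rate(f_max_hz: float) -> float:
    """Nyquist criterion: to capture content up to f_max_hz,
    the sample rate must exceed twice that frequency."""
    return 2 * f_max_hz

# A microphone extending to 140 kHz would need a converter
# running faster than:
print(min_sample_rate(140_000))  # 280000 Hz, i.e. 280 kHz
```

So recording everything such a microphone can pick up already implies sample rates far beyond the 44.1 kHz of CD audio.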
If this is all completely new, one might like to read
https://en.wikipedia.org/wiki/Sympathetic_resonance and
https://en.wikipedia.org/wiki/Acoustic_resonance to get an understanding of the basics. Then extrapolate to high frequencies beyond human hearing, and one can appreciate how they affect lower harmonics that fall within the range of human hearing.
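As a concrete illustration of that extrapolation: the harmonic series of even a moderately high note runs well past 20 kHz. A short sketch (the A6 fundamental of 1760 Hz is just an example I chose, not taken from the discussion above):

```python
AUDIBLE_LIMIT_HZ = 20_000  # nominal upper limit of human hearing

def harmonics(fundamental_hz: float, count: int) -> list[float]:
    """Return the first `count` harmonics (integer multiples)
    of a fundamental frequency."""
    return [fundamental_hz * n for n in range(1, count + 1)]

# Example: A6 (1760 Hz). From the 12th harmonic upward, the
# partials lie beyond 20 kHz, yet any sympathetic resonance they
# excite can fall back inside the audible band.
ultrasonic = [f for f in harmonics(1760, 20) if f > AUDIBLE_LIMIT_HZ]
print(ultrasonic[0])  # 21120 Hz -- the 12th harmonic
```

The point is not this particular note, but that ultrasonic partials are an ordinary feature of the harmonic series, not an exotic special case.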
At least one member of each instrument family (strings, woodwinds, brass, and percussion) produces energy to 40 kHz or above; see this paper:
https://www.cco.caltech.edu/~boyk/spectra/spectra.htm
Please notice what this paper says about cymbal crashes extending beyond their measurement limit of 102.4 kHz. For me it is the sound of the cymbals that is most obvious when appreciating how much better good 'Hi-Res' is compared to Red Book.
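To put the cymbal observation in context: each sampling rate can represent content only up to half that rate (its Nyquist limit). A quick comparison of Red Book against common hi-res rates (the rates are the standard ones; the comparison itself is my own illustration):

```python
# Standard sample rates for each delivery format.
formats_hz = {
    "Red Book CD": 44_100,
    "Hi-res 96 kHz": 96_000,
    "Hi-res 192 kHz": 192_000,
}

for name, rate in formats_hz.items():
    nyquist = rate / 2  # highest representable frequency
    print(f"{name}: Nyquist limit {nyquist / 1000:.2f} kHz")
# Red Book tops out at 22.05 kHz, so any cymbal energy beyond that
# (the paper measured content past 102.4 kHz) cannot be represented.
```

Even 192 kHz material reaches only 96 kHz, still short of where the paper's cymbal measurements ran out.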
Significance of the results
Given the existence of musical-instrument energy above 20 kilohertz, it is natural to ask whether the energy matters to human perception or music recording. The common view is that energy above 20 kHz does not matter, but AES preprint 3207 by Oohashi et al. claims that reproduced sound above 26 kHz "induces activation of alpha-EEG (electroencephalogram) rhythms that persist in the absence of high frequency stimulation, and can affect perception of sound quality."[4]
Oohashi and his colleagues recorded gamelan to a bandwidth of 60 kHz, and played back the recording to listeners through a speaker system with an extra tweeter for the range above 26 kHz. This tweeter was driven by its own amplifier, and the 26 kHz electronic crossover before the amplifier used steep filters. The experimenters found that the listeners' EEGs and their subjective ratings of the sound quality were affected by whether this "ultra-tweeter" was on or off, even though the listeners explicitly denied that the reproduced sound was affected by the ultra-tweeter, and also denied, when presented with the ultrasonics alone, that any sound at all was being played.
From the fact that changes in subjects' EEGs "persist in the absence of high frequency stimulation," Oohashi and his colleagues infer that in audio comparisons, a substantial silent period is required between successive samples to avoid the second evaluation's being corrupted by "hangover" of reaction to the first.
The preprint gives photos of EEG results for only three of sixteen subjects. I hope that more will be published.
In a paper published in Science, Lenhardt et al. report that "bone-conducted ultrasonic hearing has been found capable of supporting frequency discrimination and speech detection in normal, older hearing-impaired, and profoundly deaf human subjects."[5] They speculate that the saccule may be involved, this being "an otolithic organ that responds to acceleration and gravity and may be responsible for transduction of sound after destruction of the cochlea," and they further point out that the saccule has neural cross-connections with the cochlea.[6]
Even if we assume that air-conducted ultrasound does not affect direct perception of live sound, it might still affect us indirectly through interfering with the recording process. Every recording engineer knows that speech sibilants (Figure 10), jangling key rings (Figure 15), and muted trumpets (Figures 1 to 3) can expose problems in recording equipment. If the problems come from energy below 20 kHz, then the recording engineer simply needs better equipment. But if the problems prove to come from the energy beyond 20 kHz, then what's needed is either filtering, which is difficult to carry out without sonically harmful side effects; or wider bandwidth in the entire recording chain, including the storage medium; or a combination of the two.
On the other hand, if the assumption of the previous paragraph be wrong — if it is determined that sound components beyond 20 kHz do matter to human musical perception and pleasure — then for highest fidelity, the option of filtering would have to be rejected, and recording chains and storage media of wider bandwidth would be needed.