You talk about samples every 5us (192kHz). That would not capture what the rat endured 0.0005us !
Not precisely. Yet pretty close under two conditions:
- The duration of the pulse is much shorter than the period of the characteristic frequency of the species' basilar membrane. The characteristic frequency is usually close to the resonance frequency, yet can be strongly affected by other factors, such as the particulars of damping.
- The mechanical momentum transferred, proportional to the integral of the sound pressure over the duration of the pulse, is the same. So, given the four orders of magnitude of difference between the durations, we ought to lower the amplitude of the pulse by four orders of magnitude.
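As a sanity check, the scaling in the second condition can be verified numerically. This is a minimal sketch, not part of the experiment: the half-sine pulse shape and the 0.5 ns / 5 µs durations are illustrative assumptions. Stretching a pulse by four orders of magnitude while keeping the momentum integral ∫p dt constant requires lowering its amplitude by the same four orders of magnitude:

```python
import math

def momentum_integral(amplitude, duration, n=100_000):
    """Riemann sum of the momentum integral of a half-sine pressure pulse
    p(t) = amplitude * sin(pi * t / duration), over 0..duration."""
    dt = duration / n
    return sum(
        amplitude * math.sin(math.pi * (i + 0.5) * dt / duration)
        for i in range(n)
    ) * dt

# A 0.5 ns pulse at unit amplitude vs a 10_000x longer (5 us) pulse
# with the amplitude lowered by the same factor of 10_000:
short_pulse = momentum_integral(amplitude=1.0, duration=0.5e-9)
long_pulse = momentum_integral(amplitude=1.0e-4, duration=5.0e-6)

# Both carry the same mechanical momentum (up to numerical error).
print(short_pulse, long_pulse)
```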
There are more subtleties, considering transfer of mechanical momentum vs transfer of energy: if the system from the tympanic membrane to the inner hair cell were ideally rigid, only momentum would be transferred. Since the pressure amplitude then scales down by the full factor of four orders of magnitude (80 dB), that would give an equivalent SPL of 250 - 80 = 170 dB to reproduce the effect of that experiment in a rodent ear with the 5-microsecond pulse.
In reality, ear structures are not fully rigid - cells, membranes, and middle-ear bone joints are elastic - so some of the transfer will happen in the form of energy. The energy transferred is proportional to the integral of the square of the sound pressure over the duration of the pulse, so the amplitude needs to drop only by the square root of the duration ratio (40 dB). For pure energy transfer, we would get an equivalent SPL of 250 - 40 = 210 dB.
The real-life equivalent SPL would be somewhere between 170 dB and 210 dB. Interestingly enough, the midpoint between those is 190 dB, in the vicinity of the maximum SPL that can be carried through air by regular, non-shockwave sound waves: 194 dB. It is plausible to conclude that the mammalian hearing system evolved to endure, without irreversible damage, the maximum SPL it could encounter in nature.
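The dB arithmetic above can be checked with a short sketch. The 250 dB figure and the 10^4 duration ratio come from the discussion; everything else is just the definition SPL = 20·log10(p/p0):

```python
import math

spl_short_db = 250.0   # SPL of the original sub-microsecond pulse (from the discussion)
duration_ratio = 1e4   # 5 us replacement pulse vs the ~0.5 ns original

# Equal momentum: p * t conserved -> pressure scales by 1 / duration_ratio.
spl_momentum = spl_short_db - 20 * math.log10(duration_ratio)

# Equal energy: p^2 * t conserved -> pressure scales by 1 / sqrt(duration_ratio).
spl_energy = spl_short_db - 20 * math.log10(math.sqrt(duration_ratio))

midpoint = (spl_momentum + spl_energy) / 2
print(spl_momentum, spl_energy, midpoint)  # 170.0 210.0 190.0
```

The midpoint, 190 dB, sits just under the ~194 dB ceiling for non-shockwave sound in air mentioned above.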
What's the relevance and why would you think it is audible? (you claimed this) Simply because the outer haircells were shot and thus must have produced 'a loud sound' ?
A single half sinewave of a 2.5MHz signal ?
Perhaps an analogy would help. When a Uranium-235 fission bomb explodes, the nuclear fission process that generates the energy of the blast lasts only about a microsecond. Secondary physical processes transform the energy released by the fission into what eventually reaches the human ear as a loud sound - very loud if you are close enough.
Similarly, it is not the pulse itself that makes the sound, but what the basilar membrane and other structures of the ear "do" with the mechanical momentum and energy delivered by the pulse. It is a perfectly valid question to ask, on a qualitative level, what these structures do. The answers will be different depending on the characteristics of the pulse.
For instance, pulses with durations much longer than the period of the basilar membrane's characteristic frequency - say, a 1 Hz pulse - are generally not heard, virtually regardless of amplitude, because the cochlea employs a mechanism which hydraulically cancels them out, so the inner hair cells don't react to them. Theoretically, there could be a mechanism in the cochlea that cancels out very short pulses as well.
As the experiment plainly shows, there is no such canceling mechanism sufficiently effective for short-duration pulses at the SPL applied. The cochlea still reacted, qualitatively, in the way characteristic of how it detects sounds in the audible range.
Accounts of blast victims confirm this: subjectively, if a blast doesn't destroy the cochlea completely, the sensation is that of a very loud sound, followed by intense ringing in the ears (which also has a neurophysiological explanation).
What's it have to do with complex music signals ?
Some genres of music contain many transients. Transients are heard when listening to such music live. If transients are not captured and reproduced with sufficient accuracy, the music doesn't sound natural, because some of the temporal cues that allow the listener to separate perceptual "sound objects" are taken away.
Some (most?) commonly used recording formats don't capture the transients. Some (most?) commonly used sound systems don't reproduce them. That's a problem. MQA supposedly deals with part of this problem effectively, without falling back to uncompressed representations of transient-rich fragments of music, as competing lossy compression codecs do - though this claim still needs to be verified.