This might seem counter-intuitive, but my casual observation is that when I work (on film sound) with people who have some degree of hearing loss, they are generally more likely to show dissatisfaction when hearing AAC, MP3, EAC3 etc. than I or my peers are. Not sure about other codecs.
This sort of makes sense to me, as my understanding is that the content a lossy codec keeps is "chosen" based on models of "normal" hearing. So someone who fits that model perfectly will be less able to hear the artefacts than someone who's an outlier in terms of spectral and temporal masking, regardless of how good their hearing is in the real world. (Right? Or wrong?)
My current guess is that, because of the spectral masking assumptions baked into these codecs, listeners with hearing loss don't conform to the model. In very simplistic terms: if the model assumes a strong signal at 13.0kHz will mask a weaker one at 12.9kHz, but the listener can't hear 13.0kHz in the first place because they have a notch in their frequency response at that spot frequency, then throwing away the 12.9kHz content reveals the codec's hand to that listener. Whereas for someone who does hear the 13.0kHz, the presence (in PCM or lossless) or absence (in lossy) of the lower-level content at 12.9kHz is indiscernible. Maybe?
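To make that concrete, here's a toy sketch in Python of the argument I'm making. Every number in it (masker level, masking slope, the listener's dead spot, the hearing floor) is made up purely for illustration; real codecs use far more sophisticated psychoacoustic models than a single linear slope.

```python
# Cartoon of the masking argument above -- NOT how any real codec works.
# All values are invented for demonstration.

def masked_threshold_db(freq_hz, masker_hz, masker_db, slope_db_per_khz=20.0):
    """Threshold raised by a single tonal masker, falling off linearly
    with frequency distance (a gross simplification)."""
    distance_khz = abs(freq_hz - masker_hz) / 1000.0
    return masker_db - slope_db_per_khz * distance_khz

masker_hz, masker_db = 13_000, 60.0   # strong tone the model assumes is audible
probe_hz, probe_db = 12_900, 45.0     # weaker neighbour the codec may discard

threshold = masked_threshold_db(probe_hz, masker_hz, masker_db)
print(f"Model says probe is audible: {probe_db > threshold}")
# -> False (45 dB < 58 dB masked threshold), so the codec drops the 12.9kHz content.

# Listener with a narrow dead spot at 13kHz: the masker contributes nothing,
# so the probe only has to clear the (much quieter) absolute hearing threshold.
absolute_threshold_db = 10.0          # made-up floor for this region
print(f"Notched listener hears the probe: {probe_db > absolute_threshold_db}")
# -> True, so for that listener the discarded content is a real, audible loss.
```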
I dunno. Maybe my observations are purely coincidental! Does anyone have any thoughts, experience or research on hearing damage / aging vs lossy compression? Only out of curiosity...!