I think that's an important point to think about though. If distortion only becomes audible to the human ear at 30% (maybe this is why they landed on this figure for the CEA-2010-A spec?), then 10% distortion vs 5% vs 0.5% shouldn't really matter other than for academic purposes. In some ways, it seems similar to people chasing perfect SINAD in their DACs well beyond the limits of human hearing.
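For perspective, those percentages can be put on the same dB scale that SINAD measurements use, since THD in percent converts directly to dB relative to the fundamental. A quick sketch (the function name here is just mine, not from any spec):

```python
import math

def thd_percent_to_db(percent: float) -> float:
    """Convert a THD figure in percent to dB below the fundamental:
    20 * log10(percent / 100)."""
    return 20 * math.log10(percent / 100)

# Compare the distortion levels discussed above on the dB scale
for pct in (30, 10, 5, 0.5):
    print(f"{pct:>5}% THD = {thd_percent_to_db(pct):6.1f} dB")
# 30% works out to about -10.5 dB, while 0.5% is about -46 dB --
# still nowhere near the -100 dB+ SINAD figures people chase in DACs.
```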
BTW, I don't know of any studies regarding the audible thresholds of distortion at 10Hz, 16Hz, 20Hz, etc., but maybe others who are more knowledgeable can help with that. I pulled that 30% figure from reading this article, but I was too cheap to pay the $90 to download the actual spec, so maybe that's not exactly correct. It looks like there are actually two specs (A and B). I think B is a little more accurate in regards to human hearing (i.e., it allows more distortion at lower frequencies before a test fails), but I'm not certain of that either.