I think I understood your argument, but please tell me if I didn't. Since you used dBFS (which represents sample values), I answered with sample values in mind. I'm sorry if that's not what you meant. By the way, I edited my post too because I thought I might have been unclear; I'll try to avoid doing that in the future.
I'm well aware that oversampling can help identify potential inter-sample peaks (ISPs). In fact, it's the approach recommended by the ITU-R BS.1770-4 specification for true-peak measurement. Many engineers now rely on dBTP rather than dBFS, because dBTP estimates the peak of the reconstructed (interpolated) waveform rather than just the stored sample values. The problem is that the specification leaves implementation details open: it mandates a minimum oversampling ratio but not an exact filter, so TP meters can disagree with one another, and that's a potential source of confusion.
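To make the sample-peak vs. true-peak distinction concrete, here's a minimal numpy sketch. It uses simple FFT zero-padding interpolation as an illustrative stand-in for a real TP meter's oversampling filter (BS.1770-4 meters use a polyphase FIR, so real readings will differ slightly). The test signal is a sine at fs/4 whose samples all land at ±0.707, so a sample-peak meter reads about -3 dBFS even though the reconstructed waveform peaks at 1.0 (0 dBTP):

```python
import numpy as np

def oversample_fft(x, factor):
    """Interpolate a periodic signal by zero-padding its spectrum.
    Illustrative only -- not the BS.1770-4 polyphase FIR method."""
    n = len(x)
    spectrum = np.fft.rfft(x)
    padded = np.zeros(n * factor // 2 + 1, dtype=complex)
    padded[:len(spectrum)] = spectrum
    # Scale by `factor` to preserve amplitude after the longer inverse FFT.
    return np.fft.irfft(padded, n=n * factor) * factor

# Sine at fs/4 with a 45-degree phase offset: every sample lands exactly
# between the waveform's zero crossings and its true peaks.
n = np.arange(64)
x = np.sin(2 * np.pi * 0.25 * n + np.pi / 4)

sample_peak = np.max(np.abs(x))                   # ~0.707
true_peak = np.max(np.abs(oversample_fft(x, 8)))  # ~1.0

print(f"sample peak: {20 * np.log10(sample_peak):+.2f} dBFS")
print(f"true peak:   {20 * np.log10(true_peak):+.2f} dBTP")
```

The roughly 3 dB gap between the two readings is exactly the kind of ISP a sample-peak meter misses and an oversampling meter catches.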
Taking care of ISPs at the production stage is fine by me, and a good idea. Again, I do my best to deliver files with sufficient headroom and 'true-peak compliance', if I may say. I'm not at all against that.
The problem is: how do we deal with already-released music that isn't going to be remastered any time soon?
That's why I started this thread. By establishing an accurate test for this, we could:
1) educate listeners and enthusiasts about digital audio principles
2) raise awareness of ISPs and how to avoid them at any stage (be it production or listening)
3) encourage manufacturers to play their part in helping listeners enjoy less-than-ideal audio in the best possible way
Is it really so unreasonable?