
An analysis of some tracks, unprocessed, off of CD.

j_j

Major Contributor
Audio Luminary
Technical Expert
There are 3 plots in each image.

The first is a histogram (red) of the actual samples on the CD. Since the image isn't 65,536 pixels wide, the plot shows the largest and smallest counts within each displayed bin, but the bins themselves are calculated per individual level.

The first also includes an upsampled version (blue). Anything beyond ±1 is an intersample over, best described as "a bad thing".
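
(A minimal sketch of one way to count intersample overs: upsample the track and count reconstructed samples that land beyond ±1. This is illustrative only, under the assumption of a 4x polyphase reconstruction; it is not the actual analysis code behind these plots.)

```python
import numpy as np
from scipy.signal import resample_poly

def intersample_overs(samples_i16, upsample=4):
    x = samples_i16.astype(np.float64) / 32768.0   # normalize so digital full scale is +-1.0
    y = resample_poly(x, upsample, 1)              # reconstruct an oversampled version of the waveform
    return np.count_nonzero(np.abs(y) > 1.0)       # oversampled points beyond full scale = intersample overs
```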

The second is a histogram from a Zwicker/Fletcher loudness model. It ranges from zero (silent) to 400 (louder than (*&*(&(*). It is a good estimate of loudness, NOT SPL; that's a different discussion.

The third is a measure of the flatness of the spectrum. A large negative number indicates a large spectral tilt; a number closer to zero indicates a flatter spectrum. Zero == white noise.
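
(For reference: the most common spectral-flatness measure is the ratio of the geometric mean to the arithmetic mean of the power spectrum, expressed in dB. The exact metric used for these plots isn't specified, but it behaves the same way: 0 dB for white noise, increasingly negative as the spectrum tilts. A minimal sketch:)

```python
import numpy as np

def spectral_flatness_db(frame):
    psd = np.abs(np.fft.rfft(frame)) ** 2 + 1e-20    # power spectrum; tiny floor avoids log(0)
    geo_mean = np.exp(np.mean(np.log(psd)))          # geometric mean of the spectrum
    return 10.0 * np.log10(geo_mean / np.mean(psd))  # 0 dB for white noise, negative for tilted spectra
```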

The very last plot covers the entirety of a very large corpus of tracks, showing both histograms and extrema.

Some of the extrema are politely described as "very wrong".

Intersample overs make a lot of DACs choke. Excess loudness sounds bad.

This shows clearly, I think, how 'make it loud' makes the CD, which is technically utterly superior to LP, sound a great deal worse than LP.

And it's all in production demands for "make it loud".

black.jpg
The above is a rock track with lots of compression, but with some dynamic range remaining.
corvus.jpg
The above is a reasonably produced LOUD rock track. Notice the lack of intersample overs, the presence of clipping, and the symmetric shape of the loudness histogram.
fnord.jpg
No, no, 1000 times no to the track below: intersample overs up the wazoo, and TWO clipping levels on the positive side (WTF?).
The above (sorry, the board won't always let me add comments???) is a reasonably produced pop track.
hcn.jpg
mize.jpg
Here is a very quiet, older classical recording (above). This is how it should be, although the gain on the ADC could have been a bit higher.
wtf.jpg
The above is what I refer to as a WTF track. Intersample overs, examples of undithered gain adjustment. Just hard to explain.
total.jpg
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
The above is what I refer to as a WTF track. Intersample overs, examples of undithered gain adjustment. Just hard to explain.
WTF indeed. Does it really have distinct regular bins in its loudness distribution?
 
OP
j_j
Major Contributor
Audio Luminary
Technical Expert
The loudness distribution is quantized to 401 bins, but the actual measure can have any value.

The nastiest thing in that track is the zero probabilities in the actual PCM amplitude distribution. They applied a small gain, just greater than 1, and did not bother to dither. That's about the only way to create the hair-comb effect you see in the amplitude distribution.
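
(A toy model of that mechanism, not the processing actually applied to the track: multiplying integer samples by a gain just above 1 and re-quantizing without dither maps the input codes one-to-one onto a slightly wider output range, so some output codes can never occur, and the unused codes recur at regular intervals.)

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.integers(-20000, 20001, size=1_000_000)   # stand-in PCM samples spanning a range of 16-bit codes

gain = 1.01                                       # a small gain, just greater than 1
y = np.round(x * gain).astype(np.int64)           # re-quantized with NO dither

counts = np.bincount(y - y.min())                 # histogram over the occupied output range
print(np.count_nonzero(counts == 0), "output codes inside the range are never hit")
# the unused codes repeat at regular intervals: the "hair comb" in the amplitude histogram
```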

The loudness distribution, well, when everything else is so weird, who the (*&(*& knows.
 

jlo

Active Member
Hello JJ,

Some questions about your analysis:
- In the level histogram of the first image, the clipping (a limiter?) sits at about ±0.9 and the oversampled peaks are between 0.9 and 1.
I assume you did not process this track, so perhaps the level was decreased after limiting?

- In the level histogram of image no. 6, the vertical lines represent "missing" digital values: you say this is due to a gain increase applied without dithering.
To replicate this, I generated a 1 kHz sine at -20 dBFS with Audacity and got no missing codes; after adding 0.1 dB of gain with no dither, I got 80% missing codes. See below.
I now need to think about it to find the explanation! It is an interesting topic, because many tracks on various CDs show this behaviour, with many missing digital codes.

1k-20-1k-20+01.png
 

restorer-john

Grand Contributor
To replicate this, I generated a 1 kHz sine at -20 dBFS with Audacity and got no missing codes; after adding 0.1 dB of gain with no dither, I got 80% missing codes. See below.
I now need to think about it to find the explanation!

Generate a 1001 Hz tone @ 0 dBFS/44.1 kHz and a 1 kHz tone @ 0 dBFS/44.1 kHz and report back your findings.

Also, why are you using 48 kHz instead of 44.1 kHz?
 

restorer-john

Grand Contributor
Intersample overs make a lot of DACs choke. Excess loudness sounds bad.

This shows clearly, I think, how 'make it loud' makes the CD, which is technically utterly superior to LP, sound a great deal worse than LP.

The bottom line is that none of this is an issue if the recording is properly done in the first place. There is absolutely no need whatsoever to have intersample overs at all. There never was. It's all a byproduct of pushing the overall level too close to clipping. Creating a compressed recording and then backing the whole track down by 0.01 dB (like Daft Punk's Random Access Memories) doesn't allow the headroom needed.

The engineers recording with early digital recorders back in the 1970s knew this, as did those behind the early compact disc releases. All Benchmark are doing is trying to solve a problem that is really only a by-product of poor mastering in the first place.

It's yet another solution searching for a problem. If you have intersample errors, you likely already have a compressed, too-high-level recording in the first place, with everything going on in the top few dB.
 

jlo

Active Member
Generate a 1001 Hz tone @ 0 dBFS/44.1 kHz and a 1 kHz tone @ 0 dBFS/44.1 kHz and report back your findings. Also, why are you using 48 kHz instead of 44.1 kHz?

Here is a 1001 Hz tone @ -20 dBFS (44.1 kHz) and the same with +0.1 dB gain: no missing codes.
1001-20-1001-20+01.png


Here is a 1000 Hz tone @ 0 dBFS and the same with +0.1 dB gain: many missing codes for both signals.

1000-0-1000-0+01.png
 

Guermantes

Senior Member
Generate a 1001 Hz tone @ 0 dBFS/44.1 kHz and a 1 kHz tone @ 0 dBFS/44.1 kHz and report back your findings.

I take it this has to do with integer divisors of the sample rate exercising fewer quantisation values than non-integer ones?
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Get a copy of IEEE Standard 1241 (I know that one well; 1658 is for DACs but I do not have it on hand -- 1241 includes test signals that will work for either) for a description of test signals to excite all codes. You need to be using relatively prime ratios otherwise some codes will be "skipped" by the test signal. Often just choosing an odd integer frequency value relative to the sampling rate will work.
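
(Not a recipe from the standard itself, just the arithmetic behind the "relatively prime" advice: pick the number of cycles J in an M-sample record so that J and M share no common factor; the tone then visits M distinct phases instead of retracing a few of them. A rough sketch:)

```python
from math import gcd
import numpy as np

fs = 44_100          # sampling rate
M = 65_536           # record length (number of samples analysed)
target = 1_000.0     # roughly where we want the tone, in Hz

J = round(target * M / fs)
while gcd(J, M) != 1:                 # nudge the cycle count until it is coprime with the record length
    J += 1

f = J * fs / M                        # actual test frequency, close to the target
n = np.arange(M)
tone = np.sin(2 * np.pi * J * n / M)  # exactly J cycles in M samples, so all M sample phases are distinct
codes = np.unique(np.round(32767 * tone).astype(np.int16))
print(f"test frequency {f:.3f} Hz, distinct 16-bit codes hit: {codes.size}")
```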
 

jlo

Active Member
I understand that using a frequency that is not an integer divisor of the sampling rate should excite more codes (for CD, if I remember correctly, we used 998 Hz in EIAJ tests).
But in the example I showed above, the 1000 Hz tone at -20 dBFS had no missing codes (?) while at -19.9 dBFS it had 80% missing codes!
This difference is what I found surprising.
And I also find it strange that some tracks on commercial CDs have plenty of missing codes (most music tracks should excite all codes, no?)
 

DonH56

Master Contributor
Technical Expert
Forum Donor
The codes you hit depend upon the amplitude, signal frequency, and sampling rate. The IEEE Standard defines a scheme to hit every code at least once using a given record length (sample size) and sampling rate.

Note that below full scale there will always be codes missing, since you aren't hitting the largest values; -20 dBFS is 0.1 V/V, so 90% of the codes could be missing. I do not understand how you define "missing" codes, since anywhere around -20 dBFS a whole bunch of codes should be missing (the upper 20 dB).

I've never thought about which codes would be touched on a commercial CD, or when. You'd think it would hit pretty much all of them, given the random-ish nature of music, but it's not something I have ever tried to measure.
 

jlo

Active Member
Note that below full scale there will always be codes missing, since you aren't hitting the largest values; -20 dBFS is 0.1 V/V, so 90% of the codes could be missing. I do not understand how you define "missing" codes, since anywhere around -20 dBFS a whole bunch of codes should be missing (the upper 20 dB).
When I say that some codes are missing, my calculation only covers the range between the largest positive and largest negative values of the signal. If the peak level is -20 dBFS, I do not count codes above -20 dBFS as missing!
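
(That definition as a small sketch; the function name is illustrative, not from any particular tool: only codes between the track's largest negative and largest positive sample are counted.)

```python
import numpy as np

def missing_code_fraction(samples_i16):
    lo, hi = int(samples_i16.min()), int(samples_i16.max())
    counts = np.bincount(samples_i16.astype(np.int64) - lo, minlength=hi - lo + 1)
    return np.count_nonzero(counts == 0) / (hi - lo + 1)   # codes inside the peak-to-peak range that never occur
```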

Here are analyses of two well-known and correctly mastered tracks: Tracy Chapman's "Fast Car" and Jennifer Warnes' "Bird on a Wire".
They show 33% and 8% missing codes in their histograms. It was JJ who pointed out this problem long ago in a discussion at Hydrogenaud.io.
He may now have found some explanations.

Fast Car-Bird On A Wire.png
 
OP
j_j
Major Contributor
Audio Luminary
Technical Expert
The bottom line is that none of this is an issue if the recording is properly done in the first place. There is absolutely no need whatsoever to have intersample overs at all. There never was. It's all a byproduct of pushing the overall level too close to clipping. Creating a compressed recording and then backing the whole track down by 0.01 dB (like Daft Punk's Random Access Memories) doesn't allow the headroom needed.

The engineers recording with early digital recorders back in the 1970s knew this, as did those behind the early compact disc releases. All Benchmark are doing is trying to solve a problem that is really only a by-product of poor mastering in the first place.

It's yet another solution searching for a problem. If you have intersample errors, you likely already have a compressed, too-high-level recording in the first place, with everything going on in the top few dB.


Well, yes, there is no need for these atrocities, indeed.

I will be giving a joint IEEE/AES talk on October 3 at DigiPen, in Redmond, on this data. A rant on "WHY DOES THIS HAPPEN" will be included.
 
OP
j_j
Major Contributor
Audio Luminary
Technical Expert
Also, if you're using a sine wave, you must use one that hits every level over your analysis window. If, for instance, you use FS/4, you will only get either 2 or 3 levels, depending on sin or cos, or 4 levels using something in between.

I use a near-white noise source that is designed to hit every level when I test things like that. It's easy: you generate a ramp from -max to +max in steps of 1 level, then you scramble the order of the ramp using a standard maximal-scramble method, then you feed it through things and see what comes out.
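
(A minimal sketch of that scrambled-ramp signal; the exact "maximal-scramble" method isn't spelled out here, so a seeded shuffle stands in for it.)

```python
import numpy as np

ramp = np.arange(-32768, 32768)           # every 16-bit level exactly once, from -max to +max
rng = np.random.default_rng(1)
test_signal = rng.permutation(ramp)       # scrambled order: near-white, but still hits every level

processed = test_signal                   # placeholder for whatever device or processing is under test
missing = np.setdiff1d(ramp, processed)   # levels that never come back out
print("missing levels after processing:", missing.size)
```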

When you don't clip, it is in fact likely that there will be missing codes near min and max; that's simple probability working for you. When you see systematic, or periodic, zeros, that's the sign of a non-dithered multiply.

And, yes, it is clear that one of the tracks was clipped INSIDE max range to keep the intersample waveform JUST under "over". Shaking my head at how that one was done, but it's better than not doing it.
 
OP
j_j
Major Contributor
Audio Luminary
Technical Expert
Oh, and no, no tracks are processed. They're the original PCM as far as I know. Makes one wonder, does it not?
 

garbulky

Major Contributor
I will be giving a joint IEEE/AES talk on October 3 at DigiPen, in Redmond, on this data. A rant on "WHY DOES THIS HAPPEN" will be included.
Sorry, I'm new here. It lists you as a technical expert, which certainly sounds right from what you are saying. Who are you, if you don't mind me asking?
Also, if anybody can explain in plain English what JJ is talking about, I would much appreciate it. What are missing codes? And how does a CD have incorrect loudness? Does this only happen when the sound is basically at clipping?
 