
dallasjustice

Major Contributor
Joined
Feb 28, 2016
Messages
1,270
Likes
907
Location
Dallas, Texas
There's been a lot of complaining lately about modern recordings. It seems to me the most legitimate of these are the complaints directed at severely clipped audio. Most audiophiles don't listen to this kind of music. I listen to plenty of rock, electronic, dance and metal. I know that much of this music is dynamically compressed and clipped.

It's been my argument that this type of music was meant to be heard at a loud level. However, I want to have the most enjoyable experience at home and still listen loud. So why not take matters into my own hands? Instead of complaining, I did something about it!!

Perfect Declipper is a VST plugin which reduces hard-limiting distortion and increases dynamic range where needed.

I downloaded Perfect Declipper and installed it in JRiver.
http://www.perfectdeclipper.com/download/

Yes, I paid 100 euros. I know that's a lot of money for software. But I was hoping it would really improve my listening experience. So far it has. I think it works very well for almost all types of low-DR/clipped music. If you listen to these types of recordings, I'd suggest you check this shit out.

Btw, here's a comparison I did in Audacity. I first recorded these clips from JRiver into Reaper over Lynx Hilo ASIO. The recording chain is all digital; no conversions to analog.
[Screenshot: Audacity waveform comparison of the two captures]


I lined up the same track (Awolnation’s “Bad Wolf”) in Audacity; one track was run through Perfect Declipper and the other was not. Below are the .wav files if you want to listen to the difference. Note that Perfect Declipper attenuates the output by 6 dB, so you need to adjust levels to really compare the two. I think you'll notice a nice improvement on a decent playback system.
https://www.dropbox.com/s/hbcu0njjvplth96/01-180104_1634.wav?dl=0
https://www.dropbox.com/s/reqzxhz0ccita72/01-180104_1636.wav?dl=0
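For anyone who wants to do the level matching outside of a DAW, here is a minimal sketch (not the exact procedure used above) that undoes the plugin's reported 6 dB attenuation before comparing, assuming 16-bit or float WAV captures and NumPy/SciPy; the file names are placeholders.

```python
import numpy as np
from scipy.io import wavfile

rate_a, original = wavfile.read("original_capture.wav")    # placeholder names
rate_b, declipped = wavfile.read("declipped_capture.wav")
assert rate_a == rate_b, "captures must share a sample rate"

def to_float(x):
    # Scale 16-bit PCM to +/-1.0; leave float data untouched.
    return x.astype(np.float64) / 32768.0 if x.dtype == np.int16 else x.astype(np.float64)

original, declipped = to_float(original), to_float(declipped)
declipped *= 10 ** (6.0 / 20.0)   # undo the reported -6 dB attenuation

n = min(len(original), len(declipped))
rms = lambda x: np.sqrt(np.mean(x[:n] ** 2))
print(f"RMS original:  {20 * np.log10(rms(original)):6.2f} dBFS")
print(f"RMS declipped: {20 * np.log10(rms(declipped)):6.2f} dBFS")
```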
 

fas42

Major Contributor
Joined
Mar 21, 2016
Messages
2,818
Likes
191
Location
Australia
Now, that's what I call flat-tops!! Vicious clipping, with absolutely zero attempt to soften the overs as they happen - it will take a lot of copy and paste, and clever guessing, to sort this one out ...
 
OP
dallasjustice

Major Contributor
Joined
Feb 28, 2016
Messages
1,270
Likes
907
Location
Dallas, Texas
I’m leaving it on for everything. My wife didn’t know what I did, and she said, “This is the best your speakers have sounded.” My wife and kids love to stream the newest pop music. Declipping that kind of music is a big improvement, IMO.
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,880
Likes
16,667
Location
Monument, CO
That's pretty cool. Wonder what algorithm they use to predict the unclipped waveforms?
 

Digital_Thor

Senior Member
Joined
Mar 24, 2018
Messages
385
Likes
334
Location
Denmark
Is there any way to use a declipping program to permanently declip your FLAC collection? If needed, of course :)
 

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,247
Likes
17,162
Location
Riverview FL
Wonder what algorithm they use to predict the unclipped waveforms?

I don't know...

Here's the longest clip and repair... 51 samples, about 1.16 ms (0.001156 s)

[Waveform screenshot: longest clipped run and its repair]

Here, the declipper itself ran out of room...

[Waveform screenshot: a repair where the declipper itself runs out of headroom]
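For what it's worth, a measurement like the "51 samples" above can be sketched in a few lines; the 0.999 full-scale threshold and the 44.1 kHz rate are assumptions on my part, not necessarily what was used here.

```python
import numpy as np

def longest_clipped_run(x, rate=44100, thresh=0.999):
    """Return the longest run of consecutive near-full-scale samples, in samples and ms."""
    clipped = np.abs(x) >= thresh
    longest = current = 0
    for c in clipped:
        current = current + 1 if c else 0
        longest = max(longest, current)
    return longest, 1000.0 * longest / rate

# e.g. a run of 51 samples at 44.1 kHz works out to about 1.16 ms
```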
 

bennetng

Major Contributor
Joined
Nov 15, 2017
Messages
1,634
Likes
1,693
It also changed something it should not change, which means the declipper itself can introduce additional distortion.

[Screenshot: difference between the original and declipped waveforms in a mid-level passage]
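A null test along these lines can be sketched as follows, assuming the two captures are already sample-aligned, gain-matched float arrays (as in the earlier snippet); a declipper that only touched the flat-tops would leave essentially zero residual away from them.

```python
import numpy as np

def residual_outside_clipping(original, declipped, clip_thresh=0.99):
    """Peak difference overall, and peak difference away from near-full-scale samples."""
    n = min(len(original), len(declipped))
    diff = original[:n] - declipped[:n]
    near_clip = np.abs(original[:n]) >= clip_thresh
    overall = float(np.max(np.abs(diff)))
    mid_level = float(np.max(np.abs(diff[~near_clip]))) if (~near_clip).any() else 0.0
    return overall, mid_level
```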


"Loudness war" being a generalized term, can be achieved by many different means. Individual audio tracks can have different dynamic processing before stereo mixdown, another compressor/limiter/saturation plugins can also be added in the master bus, iZotope plugins work differently from Waves, different mastering engineers can use different parameters, all parameters can be automated, which means they can be changed over time instead of being static in the whole song.

Declippers could be useful for simple mistakes like clipping introduced by careless gain staging, naive use of resamplers, or intersample overs; in these cases the waveform is simply clipped. But in the ugly world of the loudness war, where a lot of intentional effort goes into making the final product "louder", a declipper could make a difference, but not necessarily a better one; it's all subjective.

Instead of passive, negative and unreliable attempts to restore dynamics, the introduction of loudness normalization on various streaming platforms discourages the loudness war at its source. This blog routinely posts the latest news and trends on such things:

http://productionadvice.co.uk/blog/
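As a rough illustration of what that normalization does, here is a sketch using the third-party pyloudnorm package (my choice, not something from the blog); the -14 LUFS target is just a commonly quoted streaming reference, not a claim about any specific platform's current setting.

```python
import soundfile as sf
import pyloudnorm as pyln

data, rate = sf.read("some_track.wav")       # placeholder file name
meter = pyln.Meter(rate)                     # ITU-R BS.1770 loudness meter
loudness = meter.integrated_loudness(data)   # integrated loudness in LUFS

target = -14.0                               # typical streaming reference level
normalized = pyln.normalize.loudness(data, loudness, target)
print(f"measured {loudness:.1f} LUFS, gain applied {target - loudness:+.1f} dB")
```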
 

RayDunzl

Grand Contributor
Central Scrutinizer
Joined
Mar 9, 2016
Messages
13,247
Likes
17,162
Location
Riverview FL
Good eye on that mid-level difference...

I looked for some, didn't find any.
 

sergeauckland

Major Contributor
Forum Donor
Joined
Mar 16, 2016
Messages
3,458
Likes
9,151
Location
Suffolk UK
I think the problem with any declipper is that once a signal has been clipped, any information about what happened 'above' the clipping is lost, and any waveform reconstruction is an estimate. If the level goes up and down between the start and end of the clipping, that information no longer exists in the final version, so there's nothing there for an algorithm to work with.

The subjective result may well be an improvement, and for that, perhaps, the effort is worth it, but as a means of undoing the harm done to the signal in earlier processing, nothing can restore that.

S.
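To make the "any reconstruction is an estimate" point concrete, here is a deliberately naive declipper: it simply re-draws flat-topped regions with a cubic spline fitted through the surrounding unclipped samples. This is only a guess at the lost waveform and is not how Perfect Declipper (or any commercial tool) necessarily works.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def naive_declip(x, thresh=0.99):
    """Replace near-full-scale samples with a spline estimate; mono float input."""
    x = np.asarray(x, dtype=np.float64).copy()
    clipped = np.abs(x) >= thresh
    keep = ~clipped
    if clipped.any() and keep.sum() >= 4:
        idx = np.arange(len(x))
        spline = CubicSpline(idx[keep], x[keep])
        # The "guess": nothing guarantees this matches what was lost above the clip level.
        x[clipped] = spline(idx[clipped])
    return x
```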
 

Timbo2

Senior Member
Joined
Feb 27, 2018
Messages
497
Likes
396
Location
USA
The loudness war may be over already:
Source: http://ismir2015.uma.es/articles/136_Paper.pdf

I hope you are right! I scanned this paper several times, but still couldn’t pick up where they sourced the actual tracks they analyzed. For example, did they use original CDs, or possibly a streaming service, which could add its own compression? They discussed Allmusic, but I read that as being for distinguishing genre.
 

svart-hvitt

Major Contributor
Joined
Aug 31, 2017
Messages
2,375
Likes
1,253
I hope you are right! I scanned this paper several times, but still couldn’t pick up where they sourced the actual tracks they analyzed. For example, did they use original CDs, or possibly a streaming service, which could add its own compression? They discussed Allmusic, but I read that as being for distinguishing genre.

@Timbo2, the source of their material is to be found in the article; quote:

«2. METHOD
2.1 Music corpus
The music corpus we rely on is a revision and extension of the corpus used in [6]. It includes 7200 tracks released between 1967 and 2014, 150 tracks per year. Track selection is based on Besteveralbums.com, a review aggregator. For each year, we choose the albums with the best ratings. If a given artist is the author of more than three well-rated albums, we select the artist’s complete discography. While this method does not lead to a random sampling, it ensures that the corpus is based on music that is popular. We choose to start the corpus at the end of the sixties because these years can be considered as the advent of the contemporary pop/rock era, characterized by the creative use of the recording studio [8, p. 157] along with mass media availability [20].»

Footnote/reference 6 above is:

[6] E. Deruty and D. Tardieu, “About Dynamic Processing in Mainstream Music,” J. Audio Eng. Soc. 62(1/2), pp. 42-55, Jan. 2014.

When I go to this article, «About Dynamic Processing in Mainstream Music», I find this:

«1. CORPUS
«1. CORPUS
The music corpus we use in this article is made of 4500 tracks released between 1967 and 2011, each year corresponding to 100 tracks. These tracks are taken from albums that got considerable “commercial” and/or “critical” success according to two different online sources [27],[28], and chart archives from the US Billboard [29]. While this method of selection does not lead to a random sample, it ensures that the corpus is based on music that is broadly listened to. Each album from the corpus was verified as featuring a mastering that could realistically be performed at the time of the initial release – excluding as a remastered edition, for instance, obvious digital brickwall limiting on a 1970 album. We choose to start the corpus at the end of the sixties, for the reason that we consider these years as the advent of the contemporary pop/rock era, characterized with the creative use of the recording studio [3] (p. 157) along with mass media availability [30].
Some experiments we perform in this article require the use of a specific sub-corpus, made from tracks that were not processed with a brickwall look-ahead limiter. From our main corpus, we randomly choose 10 songs from each year between 1969 and 1989. As no brickwall limiters were used in the studio before the beginning of the 1990s [3](pp. 279–280), and as the tracks were manually checked as not being remastered, this ensures that no brickwall limiting was applied on any of the 210 resulting tracks».

In other words, the database seems to be a laborious one, based on manual work and judgments along the way.

This is, by the way, often the reason why we have so little empirical research; scientists are lazy, as are most people. So people fall in love with an idea instead (for example «I hate the loudness war!!!»), without taking the time to check the data (which would tell the emotional audio enthusiast that the war is about to end and «peace» is on the horizon).

Empirical scientists are often the real heroes; without empirical data, there is no science. Which is not to suggest that measurement without theory is sufficient for a robust research program.
 

L5730

Addicted to Fun and Learning
Joined
Oct 6, 2018
Messages
670
Likes
439
Location
East of England
I've read the articles about the loudness war already being won and over, and about how material should be coming out less loud from now on. Nope, I am not seeing it, except for a few rarer examples.

Rod Stewart, Paloma Faith, MUSE - lots of different artists, old and new - their new material is just being slammed out at -6 LUFS! That is, the program material is sitting around 6 dB from the maximum possible signal level. A limiter is bunged on at -0.1 dBFS and everything is rammed into it.
I guess, as a technical achievement, being able to crush music into that small a number of bits is quite impressive - but these releases sound horrible. Drums do not sound like drum kits anymore; it's all full of fake bass and strange crunchy synth sounds, which may or may not have come from a synth.

I've used iZotope DeClip on some material and tried A/B listening tests on it. Very rarely did I think there was an improvement; most of the time I found artefacts. I had a couple of other folks listen in too, and they had the same opinions as mine. I appreciate there may have been some bias as it wasn't completely blind, but I would think we'd prefer one version over the other relatively consistently - and that was not the case.

To reword the phrase from South Park - "Oh my God, they killed music! You B******s!"
 

JJB70

Major Contributor
Forum Donor
Joined
Aug 17, 2018
Messages
2,905
Likes
6,151
Location
Singapore
I think that the crappy mastering and compression isn't only about the old loudness war but also about optimising music for the sort of audio equipment most people use now. There may also be an element of people now being conditioned to prefer it.
 

L5730

Addicted to Fun and Learning
Joined
Oct 6, 2018
Messages
670
Likes
439
Location
East of England
In my limited experience, I don't think there is much need (in mastering) to squish anything beyond -10 LUFS, let alone beyond -13 LUFS. Much past that loudness level and there are sonic trade-offs from having to push the music harder.
Now, the issue is that it's not so much the mastering process that is doing this. The whole mixing process is arguably wrong. I suppose it's a hard argument to make, because certain dynamic tools are used to sculpt and colour the sound of each mix element, and the mix overall, in a particular way. Sometimes gritty distortion is added to drums, for example, or some filtering that makes the guitars sound like garbage on their own but gives them a certain impact in the full mix.

I'm speaking in terms of flat-topped waveforms - or, described more accurately, runs of multiple (>5) consecutive samples sitting at the exact maximum sample value of the program material.

I honestly believe that what is being submitted to mastering is often so far gone that it should really be rejected. Of course, folks still want to get paid, not just keep rejecting everything. Mastering should be picking up clipping, though, and sending it back as broken. This used to happen in the earlier days of CD manufacturing: at the plant, they would detect overs and contact the (pre)mastering engineer to see how they wanted to proceed. I guess shoving a brickwall limiter at -0.1 dBFS solves that - you can submit any s**t you want and it won't flag up.

The argument that it is how someone wanted it to sound - no, I can't accept that. There are too many variables in what the DAC will do with such a digital signal. How will the analogue wave be reconstructed when actually converted to analogue voltage?

It's broken and it's defective, and we shouldn't have to accept it.
They wonder why CD sales are in the toilet, yet do nothing about producing a high-quality product.
Anyone who wants good-sounding CDs of older material generally sources out-of-production discs on the second-hand market. It's getting rarer to see remastered CDs actually improve on the old ones, which used converters that were, by today's standards, lousy. Maybe some remasters sound a bit better on a portable music player or in the car, but mostly not - just a hyped-up, not-quite-right loud version of something done better 30+ years ago.
 

infinitesymphony

Major Contributor
Joined
Nov 21, 2018
Messages
1,072
Likes
1,809
Reminds me of this note from John Siau (Benchmark):

"Benchmark reserves the top 3.5 dB of the ES9018 for headroom to prevent clipping of intersample peaks. These peaks can occur many times per second and they cause clipping and overloads in digital processing systems that lack digital headroom. Digital systems need at least 3.01 dB of headroom above 0 dBFS to prevent this problem."

https://benchmarkmedia.com/blogs/application_notes/inside-the-dac2-part-2-digital-processing
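The 3 dB figure is easy to reproduce with the classic worst case: a sine at one quarter of the sample rate whose samples all sit exactly at full scale, while the waveform between them peaks about 3 dB higher. The sketch below approximates the reconstructed waveform with 8x oversampling; the numbers and names here are mine, not Benchmark's.

```python
import numpy as np
from scipy.signal import resample_poly

fs = 48000
n = np.arange(fs)                                         # one second of samples
x = np.sin(2 * np.pi * (fs / 4) * n / fs + np.pi / 4)     # sine at fs/4, 45 degree phase
x /= np.max(np.abs(x))                                    # sample peak = exactly 0 dBFS

up = resample_poly(x, 8, 1)                               # approximate reconstruction
true_peak = np.max(np.abs(up[1000:-1000]))                # ignore filter edge effects
print(f"true peak: {20 * np.log10(true_peak):+.2f} dBFS") # comes out close to +3 dB
```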
 

JJB70

Major Contributor
Forum Donor
Joined
Aug 17, 2018
Messages
2,905
Likes
6,151
Location
Singapore
This thread is illustrating why I think arguments over formats, sampling frequencies etc. are really missing the point. Rubbish in = rubbish out, and the state of many recordings means they would waste even a high-quality MP3, never mind CD and high-res.
 

bennetng

Major Contributor
Joined
Nov 15, 2017
Messages
1,634
Likes
1,693
Reminds me of this note from John Siau (Benchmark):

"Benchmark reserves the top 3.5 dB of the ES9018 for headroom to prevent clipping of intersample peaks. These peaks can occur many times per second and they cause clipping and overloads in digital processing systems that lack digital headroom. Digital systems need at least 3.01 dB of headroom above 0 dBFS to prevent this problem."

https://benchmarkmedia.com/blogs/application_notes/inside-the-dac2-part-2-digital-processing
While intersample overloading is a side effect of the loudness war, it is rarely the main reason why loudness-war songs sound bad. Also, this issue can easily be avoided without using a Benchmark DAC. I have made quite a number of posts about this issue. IMO, Benchmark's article is more or less marketing.

https://www.audiosciencereview.com/...ation-audible-intersample-clipping-test.2231/
https://www.audiosciencereview.com/forum/index.php?threads/intersample-over-test.3730/#post-89453
https://www.audiosciencereview.com/...nd-dacs-with-volume-controls.5432/post-121344
 

tmtomh

Major Contributor
Forum Donor
Joined
Aug 14, 2018
Messages
2,731
Likes
7,993
My $.02, based on a good deal of experience trying to de-clip albums, is that software de-clippers can be extremely useful, but they are hit or miss, and the hit-or-miss is not the fault of the software. It really depends on how the particular musical source was mixed and mastered.

De-clippers work best when significant digital limiting has been applied at the final mastering stage, after the multitracks have been mixed down to a two-track stereo file. Decent de-clippers do a surprisingly good job of restoring clipped/limited peaks. I don't know how accurate the restored peaks are technically, but based just on listening they certainly sound fine - no audible phase issues, no strange distortion, and so on.

The real trick/issue with de-clippers is the threshold level: the volume level at which you set the de-clipper to activate. For example, a -2 dB threshold means that any part of the waveform (or any peak, at any rate) below -2.0 dB gets left alone, while anything at -2.0 dB or louder gets de-clipped/reconstructed.
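For reference, converting such a threshold to a linear sample value (assuming full scale = 1.0) is a one-liner; this is just a worked example, not any particular plugin's code.

```python
def declip_threshold(db: float) -> float:
    """dBFS threshold -> linear sample value, with full scale = 1.0."""
    return 10 ** (db / 20.0)

print(declip_threshold(-0.5))   # ~0.944: conservative, only the very top gets touched
print(declip_threshold(-2.0))   # ~0.794
print(declip_threshold(-4.0))   # ~0.631: aggressive, reaches well down into the mix
```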

The problem comes when too much compression is baked into the mix. In such situations, there is no de-clipping threshold that will produce a good-sounding result. If you use a conservative threshold - say, -0.5dB or -1dB - the result will not sound (or look) significantly different than the original: the waveform will still look buzzcut, except with just a few "stray hairs" sticking out, meaning the de-clipper restored a small number of peaks but left the overall compression unchanged. Conversely, if you get more aggressive with the threshold - say -4.0dB - you can produce a much more natural-looking waveform and a significantly different-sounding result. But the sonic changes will not be good - in exchange for increased dynamic range, you'll get a loss of bass impact, and sometimes a loss of transient snap, and the frequency/EQ balance will be noticeably altered (again not for the better). Even cranking up the volume to try to compensate, the result will sound dull and lifeless.

Sometimes, however, you get lucky and the de-clipper can work quite well. Ironically it works best when the mastering is worse - when the mix is not super-compressed but the mastering engineer has ridden the digital limiter really hard during the final mastering stage. In these cases, most of the dynamic compression is up there at the high peak levels, and so you can use a conservative threshold of -1.5dB or less and greatly restore dynamics while leaving the punch and essential frequency balance of the mix intact.

Finally, I would strongly recommend against using a de-clipper on borderline sources. A DR5, 6, or 7 album? Sure, have at it - usually the thing is unlistenable as-is so you have nothing to lose. But a DR8 or 9 album that's otherwise well-recorded and mixed? In my experience a de-clipper probably isn't going to help much, and it might make things sound worse even if it creates a result with a higher DR reading. In those cases you're better off just turning down your volume knob a little during playback.
 

Snarfie

Major Contributor
Forum Donor
Joined
Apr 30, 2018
Messages
1,183
Likes
932
Location
Netherlands
There's been a lot of complaining lately about modern recordings. It seems to me the most legitimate of these are the complaints directed at severely clipped audio. Most audiophiles don't listen to this kind of music. I listen to plenty of rock, electronic, dance and metal. I know that much of this music is dynamically compressed and clipped.

It's been my argument that this type of music was meant to be heard at a loud level. However, I want to have the most enjoyable experience at home and still listen loud. So why not take matters into my own hands? Instead of complaining, I did something about it!!

Perfect Declipper is a VST plugin which reduces hard-limiting distortion and increases dynamic range where needed.

I downloaded Perfect Declipper and installed it in JRiver.
http://www.perfectdeclipper.com/download/

Yes, I paid 100 euros. I know that's a lot of money for software. But I was hoping it would really improve my listening experience. So far it has. I think it works very well for almost all types of low-DR/clipped music. If you listen to these types of recordings, I'd suggest you check this shit out.

Btw, here's a comparison I did in Audacity. I first recorded these clips from JRiver into Reaper over Lynx Hilo ASIO. The recording chain is all digital; no conversions to analog.
[Screenshot: Audacity waveform comparison of the two captures]

I lined up the same track (Awolnation’s “Bad Wolf”) in Audacity; one track was run through Perfect Declipper and the other was not. Below are the .wav files if you want to listen to the difference. Note that Perfect Declipper attenuates the output by 6 dB, so you need to adjust levels to really compare the two. I think you'll notice a nice improvement on a decent playback system.
https://www.dropbox.com/s/hbcu0njjvplth96/01-180104_1634.wav?dl=0
https://www.dropbox.com/s/reqzxhz0ccita72/01-180104_1636.wav?dl=0
Very interesting. How long does it take to do a conversion/declip, let's say per minute of music?
 