In another thread there was some discussion about whether the sample frequency should ever change during a recording.
What you hear in a "normal" production may originally have been recorded at 96 kHz, but it has typically been through sample-rate conversion (SRC) many times before it finally arrives at TIDAL or Spotify at 44.1 kHz.
One must remember that putting reverb and compression on a drum kit in a mix (as is almost always done) demands resampling twice for the whole track (or often eight tracks for the drum kit alone).
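As a rough illustration of why SRC is not a lossless operation, here is a sketch that sends a 1 kHz tone from 96 kHz down to 44.1 kHz and back. It uses crude linear interpolation purely for illustration; real converters use proper anti-aliasing filters, so their error profile is different (and much better), but the round trip still alters the data:

```python
import numpy as np

fs_high, fs_low = 96_000, 44_100

# One second of a 1 kHz test tone at 96 kHz, peaking at -6 dBFS
t_high = np.arange(fs_high) / fs_high
x = 0.5 * np.sin(2 * np.pi * 1000 * t_high)

# Down to 44.1 kHz and back up, via simple linear interpolation
t_low = np.arange(fs_low) / fs_low
down = np.interp(t_low, t_high, x)    # 96 kHz -> 44.1 kHz
back = np.interp(t_high, t_low, down) # 44.1 kHz -> 96 kHz

# The round trip does not reproduce the original samples exactly
err = np.max(np.abs(back - x))
print(f"round-trip peak error: {err:.2e}")
```

With linear interpolation the error here is on the order of 10^-3; a good polyphase SRC would be far cleaner, but never bit-identical.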
There is a lot of confusion about this .
Myself, I always try to record acoustic instruments at a native sampling rate of 96 kHz, using only two microphones and skipping the whole mixing process.
But I still have to do one digital manipulation before the recording is finished, and that is "normalisation" of the recording.
In an acoustic recording you always keep about 10 dB of headroom against digital clipping, so the recorded tracks end up roughly 10 dB quieter than a normal CD. At normalisation you lift the level up to -1 dB. This is done in the digital domain at 32- or 64-bit resolution in a DAW.
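For concreteness, the normalisation step is a single gain multiplication. A minimal sketch with the figures above (assuming the peak sits at -10 dBFS and the target is -1 dBFS):

```python
import math

def db_to_lin(db: float) -> float:
    """Convert a dB value to a linear amplitude factor."""
    return 10 ** (db / 20)

peak_db, target_db = -10.0, -1.0
gain = db_to_lin(target_db - peak_db)  # +9 dB of gain, roughly 2.8x
print(f"gain: {gain:.3f}x ({target_db - peak_db:+.1f} dB)")

# A hypothetical sample near the peak, scaled by that gain in float:
sample = 0.9 * db_to_lin(peak_db)   # value at about -10.9 dBFS
normalised = sample * gain          # same value lifted by 9 dB
```

The operation itself is trivial; the question raised in this thread is whether the float rounding it introduces is audible.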
My experience with DAWs like Logic Pro and Audacity is that this simple digital manipulation (done internally at 64-bit in Logic Pro X, 32-bit in Audacity) can be heard as a slightly less natural sound, unfortunately.
The -10 dB, 96 kHz, 24-bit recording will sound a bit better than the finished -1 dB version: more natural, and with less digital "glare".
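Whether the difference is audible I leave to the thread, but the raw numeric error of the gain change can be estimated. A numpy sketch comparing a 64-bit and a 32-bit float version of the same +9 dB gain against the size of one 24-bit step (an illustration only, not a model of any specific DAW's mixer):

```python
import numpy as np

rng = np.random.default_rng(0)

# Simulated 24-bit samples at about -10 dBFS (values quantised to 2^-23 steps)
x24 = np.round(rng.uniform(-0.3, 0.3, 100_000) * 2**23) / 2**23
gain = 10 ** (9 / 20)  # -10 dB -> -1 dB

# The same gain applied in 64-bit and in 32-bit float arithmetic
y64 = x24.astype(np.float64) * gain
y32 = x24.astype(np.float32) * np.float32(gain)

err32 = np.max(np.abs(y64 - y32))  # worst-case divergence between the two
q24 = 1 / 2**23                    # one 24-bit LSB
print(f"32-bit float error: {err32:.2e}  (one 24-bit step: {q24:.2e})")
```

The divergence is on the order of one 24-bit LSB or less, i.e. at or below the format's own quantisation floor; the debate is whether anything at that level can really be heard.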
This is a sad thing, because when I started recording I always thought this was entirely inaudible, which is not true.
————————
What do you, hobby sound engineers or professionals working in studios, have to say about this?
From a purist perspective you are absolutely right. But if you record anything other than classical instruments with two channels and no effects, you have to mix it in a software program (DAW) like Logic, using reverb plug-ins (often running at 48 kHz) or compression (48 kHz) and so on…

"One should never do that. Everything should be recorded at the maximum available resolution and never changed. There is no reason to do that. The only time it may be changed is at mastering, to suit the medium the track is distributed on."