For a sampled system with discrete levels, e.g. an ADC or a DAC, theory predicts (and measurements confirm) an SNR of about 6.02N + 1.76 dB for an N-bit converter. This assumes a sinusoidal input at full scale (but not clipped) and quantization noise as the only noise source. The derivation is not too bad but I'd have to look it up (I have it done several ways but my references -- and notes -- are not with me at the moment). SNR takes the single signal tone and compares it to all the noise; think of an FFT plot with a signal tone and compare that to the sum (actually root-sum-square) of all the noise floor components at the bottom (the "grass"). The answer would be different if the input were not a sine wave, BTW.
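You can check that figure numerically without doing the derivation. Here's a quick sketch of my own (not from any reference): quantize a full-scale, coherently sampled sine to N bits, take an FFT, and compare the carrier bin against the root-sum-square of everything else. The parameter choices (4096 points, bin 127) are just convenient values that keep the tone on an FFT bin and exercise many codes.

```python
# Sketch: measure the SNR of an ideal N-bit quantizer driven by a
# full-scale sine, and compare it to the 6.02N + 1.76 dB rule.
import numpy as np

N_BITS = 10          # converter resolution
M = 4096             # number of samples (FFT length)
K = 127              # carrier bin; odd and coprime with M, so the
                     # sampled sine exercises many different codes

n = np.arange(M)
x = np.sin(2 * np.pi * K * n / M)        # full-scale sine, coherent sampling

# Ideal quantizer: round to the nearest of the available levels.
levels = 2 ** (N_BITS - 1) - 1
xq = np.round(x * levels) / levels

P = np.abs(np.fft.rfft(xq)) ** 2         # power spectrum
signal_power = P[K]
noise_power = P[1:].sum() - P[K]         # everything except DC and the carrier

snr_db = 10 * np.log10(signal_power / noise_power)
print(f"measured SNR = {snr_db:.2f} dB, theory = {6.02 * N_BITS + 1.76:.2f} dB")
```

For 10 bits the measured value lands close to the theoretical 61.96 dB; change N_BITS and the measured SNR tracks the 6 dB-per-bit slope.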
Now, remember the SNR includes all the noise, all the "grass" at the bottom of the plot. Each individual strand of grass must therefore sit well below the total -- much more than 6N dB down from the signal -- or the resulting RSS value would exceed that total. It turns out, theoretically and empirically, that each individual strand (noise frequency bin in the FFT) is actually about 9N dB down. That derivation is not so straightforward, involving Bessel functions and other high-level math that makes my head hurt. Yes, I've done it, but it is more painful for a hairy-knuckled pea-brained engineer like myself.
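Again, easy to see by simulation rather than Bessel functions. This sketch (mine, same assumed parameters as before) measures both numbers from one FFT: the integrated noise behind the SNR, and the single tallest noise bin, the "tallest strand of grass". The tallest strand necessarily sits below (deeper than) the integrated-noise figure, and for an ideal quantizer it lands in the neighborhood of the 9N dB rule of thumb.

```python
# Sketch: compare total noise ("all the grass") against the single
# tallest noise bin ("one strand") for an ideal N-bit quantizer.
import numpy as np

N_BITS = 10
M = 4096
K = 127                                  # carrier bin, coprime with M

n = np.arange(M)
x = np.sin(2 * np.pi * K * n / M)        # full-scale coherent sine
levels = 2 ** (N_BITS - 1) - 1
xq = np.round(x * levels) / levels       # ideal N-bit quantizer

P = np.abs(np.fft.rfft(xq)) ** 2
carrier = P[K]
noise_bins = np.delete(P[1:], K - 1)     # drop DC and the carrier bin

snr_db = 10 * np.log10(carrier / noise_bins.sum())    # all the grass
worst_db = 10 * np.log10(carrier / noise_bins.max())  # tallest strand

print(f"SNR (total noise)  : {snr_db:.1f} dB")
print(f"tallest single bin : {worst_db:.1f} dB below the carrier")
```

The exact depth of the tallest bin moves around a bit with the choice of frequency and amplitude, which is part of why that derivation is the painful one.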
HTH - Don
Article on sampling:
http://www.whatsbestforum.com/showthread.php?1209-Sampling-101