MRC01
Major Contributor
Learning more about how DACs work, I read about the Whittaker-Shannon interpolation formula. I created a spreadsheet implementing that formula so I could plug in test sampling points and see the analog wave it constructs. I realized that each sampling point contributes a ripple that affects the wave around all its neighboring sampling points, to a decreasing extent as you get further away. This led to two realizations:
1. Any DAC that uses this method must read ahead. The spread of the ripple is symmetric in time, so to compute the amplitude of the analog wave at NOW, you need to know the sampling points in the future (as well as those in the past).
2. Any DAC that uses this method must have considerable processing power. Because each sampling point affects the wave for thousands of sample periods both before and after it, to determine the analog amplitude at NOW you must consider not just THIS sampling point, but also the influence of a few thousand samples on either side of it.
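The reconstruction described above can be sketched in a few lines of Python (a toy version of the spreadsheet's formula, not any real DAC's implementation; the `samples` values are made up for illustration):

```python
import math

def sinc(x):
    # Normalized sinc: sin(pi*x)/(pi*x), with sinc(0) = 1
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

def reconstruct(samples, t):
    """Whittaker-Shannon interpolation: the amplitude at continuous
    time t (in sample periods) is the sum of every sample's sinc ripple."""
    return sum(s * sinc(t - n) for n, s in enumerate(samples))

# At an exact sample instant the formula returns that sample itself,
# because every other sample's sinc ripple crosses zero there.
samples = [0.0, 0.5, 1.0, 0.5, 0.0, -0.5, -1.0, -0.5]
print(reconstruct(samples, 3.0))   # 0.5, within float rounding
print(reconstruct(samples, 3.5))   # in-between value built from all ripples
```

Note that `reconstruct` needs every sample, past and future, which is exactly the read-ahead problem described in point 1.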
Sanity check: for 16-bit samples, for a full-scale 0 dB sample, how many samples away does the ripple from that sample drop below the noise floor (-96 dB)? From the spreadsheet one can compute this to be about 22,000 samples. Thus a "perfect" DAC at 16-bit resolution must compute the influence of 44,000 samples for each sampling point (22,000 before and 22,000 after) in order to ensure the amplitude at that point properly includes all ripple effects above the noise floor. I suspect it's only coincidence this number is close to the CD sampling rate. To do this, the DAC must read ahead half a second, buffer 1 second of data, compute the sinc formula for all 44,000 samples in the buffer, each evaluated at the current output time, and sum them.
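The same sanity check can be done without the spreadsheet: between sample instants, the ripple from a full-scale sample n samples away is bounded by the sinc envelope 1/(pi*n), so the crossover below -96 dB is where 1/(pi*n) falls under 10^(-96/20). A quick script (my own check, independent of the spreadsheet):

```python
import math

# 16-bit noise floor: -96 dB relative to full scale
floor = 10 ** (-96 / 20)          # about 1.585e-5

# Ripple envelope of a full-scale sample n samples away: 1/(pi*n).
# Find the first n where the envelope sinks below the noise floor.
n = 1
while 1 / (math.pi * n) > floor:
    n += 1

print(n)                          # roughly 20,000: same order as the
                                  # spreadsheet's ~22,000
seconds = n / 44100               # read-ahead time at CD rate
print(round(seconds, 3))          # about half a second
```

The envelope bound gives ~20,000 rather than 22,000; the exact figure depends on where between sample instants you measure the ripple, but either way it confirms the half-second read-ahead estimate.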
It appears this ripple is where the Gibbs phenomenon comes from. It's neat to understand what causes it. It also shows that the ripple frequency is the Nyquist frequency, so it should be inaudible.
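That the ripple sits at Nyquist is easy to check numerically: reconstruct a lone full-scale sample (a digital impulse) and look at the zero crossings of its sinc tail. They land one sample period apart, which means a period of two samples, i.e. fs/2. A small sketch (my own illustration, not from the spreadsheet):

```python
import math

def sinc(x):
    return 1.0 if x == 0 else math.sin(math.pi * x) / (math.pi * x)

# Tail of a single full-scale sample at t=0, evaluated on a fine grid
# starting 5 sample periods away (t is in units of sample periods).
step = 0.01
ts = [5 + i * step for i in range(1000)]      # 10 sample periods of tail
vals = [sinc(t) for t in ts]

# Find the zero crossings of the ripple
crossings = [ts[i] for i in range(1, len(vals)) if vals[i - 1] * vals[i] < 0]
gaps = [b - a for a, b in zip(crossings, crossings[1:])]

# Crossings are ~1 sample period apart -> period of 2 samples -> Nyquist
print(min(gaps), max(gaps))
```

The gaps all come out close to 1.0 sample period, confirming the ripple oscillates at half the sampling rate.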
Does oversampling raise this Gibbs ripple frequency? It would seem that it does, but virtually all DACs oversample, and the analog waves I see from CD sources always show the Gibbs ripple at the same frequency.
I assumed DACs that have a standard "sharp" filter are using this method for D/A reconstruction. But now that I see it requires so much read-ahead and processing power, I question whether that can be true. Or perhaps they approximate it with just a few samples before and after the current one, reducing the computational load at the cost of less-than-ideal theoretical performance.
This site won't let me upload the spreadsheet, but here's a picture. If there's a way to post the actual spreadsheet, I'd appreciate a review of my formulas to make sure I got it right.
PS: I've also been reading about how R2R and DS DACs work. It seems the above sinc(t) stuff is only theoretical, not actually used. In case anyone is interested, here's the best explanation of DS I've found: https://www.beis.de/Elektronik/DeltaSigma/DeltaSigma.html. At first read I only understand half of it but a bit more might sink in over time.