(This is going to be a fairly long post so I'll write a little bit at a time - adding updates when I've done some more research. Please do leave constructive comments in the thread for how you'd like to see it develop.)
DACs in audio applications really hit their stride when the CD was introduced back in late 1982. Before that they appeared only in low-volume professional products, since digital recording technology predated the CD. I'm going to focus on the evolution of DACs mainly in the consumer products from Philips, as detailed information from Sony (CD's other inventor) isn't so easy to find.
The two first-generation CD players were Sony's CDP-101 and Philips' CD100. They took rather different approaches to implementing the digital-to-analog conversion stages, but both used bipolar (the semiconductor process) DAC technology, what we now call 'multibit'. In Sony's case the DAC chip used was the CX20017, a single 16bit DAC time-multiplexed to create the L and R stereo signals. I was only able to find a single-page summary datasheet for this part - it consumed around 2W from a 10V (total) analog supply.
Philips, on the other hand, didn't have a 16bit DAC chip available at launch, having designed a part based on the initial CD specification of 14bits. This DAC chip was the TDA1540, and its datasheet is still available - it gives some insight into how the designers solved the considerable challenge of keeping the ratios between the bit weights constant over process variations and time. The solution here is a hybrid of current division by transistors combined with current division by switches. In later designs this switching architecture (unique to Philips as far as I'm aware) was termed 'Dynamic Element Matching' (DEM for short).
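To make the DEM idea concrete, here's a toy sketch (not the TDA1540's actual circuit - the values and swapping scheme are invented for illustration): two nominally equal current sources with a static mismatch are swapped at high speed, so the time-averaged current seen after the reconstruction filter is their mean, turning a fixed weight error into high-frequency ripple that filtering can remove.

```python
# Toy illustration of the dynamic element matching (DEM) principle.
# The 2% mismatch and the simple alternating swap are hypothetical,
# chosen only to show how averaging cancels a static element error.

i_a, i_b = 1.02, 0.98   # two 'equal' current sources, each 2% off nominal

# Without DEM: committing to one source bakes in a fixed 2% weight error.
static = i_a

# With DEM: alternate between the sources every clock period; the
# time average converges on the mean of the two, cancelling the mismatch.
samples = [i_a if n % 2 == 0 else i_b for n in range(1000)]
averaged = sum(samples) / len(samples)

print(static, averaged)   # averaged sits at the nominal value of 1.0
```

The mismatch hasn't disappeared - it has been moved to a high frequency (half the swap rate), where the analog output filter can deal with it far more easily than with a static bit-weight error.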
A diversion is in order here to mention the major difficulty of designing a DAC. D/A conversion is done from binary (the usual digital representation of numbers) into an output current by summing contributions from current sources weighted in successive 2:1 ratios. This ratio needs to be extremely precise because for a 16bit DAC it's applied 15 times, going from the MSB (most significant bit) down to the current needed by the LSB (least significant bit). The LSB's weight is 32,768 times smaller than the MSB's - even a small error in the factor of 2 applied between bits gets compounded when repeated 15 times.