
A brief history of audio DACs

Opus111

Addicted to Fun and Learning
Joined
Mar 2, 2016
Messages
666
Likes
38
Location
Zhejiang
(This is going to be a fairly long post so I'll write a little bit at a time - adding updates when I've done some more research. Please do leave constructive comments in the thread for how you'd like to see it develop.)

DACs in audio applications really hit their stride when the CD was introduced back in 1982. Prior to that they appeared only in low-volume professional products, since digital recording technology predates the CD. I'm going to focus on the evolution of DACs mainly in consumer products from Philips, as detailed information from Sony (the CD's co-inventor) isn't so easy to find.

The two first-generation CD players were Sony's CDP-101 and Philips' CD100. They took rather different approaches to implementing the digital-to-analog conversion stages, but both used bipolar (the semiconductor process) DAC technology of the kind we now call 'multibit'. In Sony's case the DAC chip was the CX20017, a single 16-bit DAC time-multiplexed to create the L and R stereo signals. I was only able to find a single-page summary datasheet for this part - it consumed around 2 W from a 10 V (total) analog supply.

Philips, on the other hand, didn't have a 16-bit DAC chip available at launch, having designed a part around the initial CD specification of 14 bits. This DAC chip was the TDA1540, and its datasheet is still available - it gives some insight into how the designers solved the considerable challenge of keeping the ratios between the bit weights constant over process variations and time. The solution here is a hybrid of current division by transistors combined with current division by switches. In later designs this switching architecture (unique to Philips as far as I'm aware) was termed 'Dynamic Element Matching' (DEM for short).

A diversion is in order here to mention the major difficulty of designing a DAC. D/A conversion turns binary (the usual digital representation of numbers) into an output current by summing contributions from current sources weighted in a succession of 2:1 ratios. This ratio needs to be extremely precise, because for a 16-bit DAC it's applied 15 times from the MSB (most significant bit) to create the current needed by the LSB (least significant bit). The LSB's weight is 32,768 times smaller than the MSB's - even a small error in the factor of 2 applied between bits gets compounded when repeated 15 times.
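To put a number on that compounding, here's a quick sketch (my own illustration, not from any datasheet) comparing an ideal binary-weighted ladder against one where every 2:1 division is off by the same small factor. The 0.1% per-division error is an arbitrary assumption for illustration.

```python
# A binary-weighted DAC derives bit k's weight by applying a 2:1 current
# division k times. If every division is off by the same small relative
# error, the error compounds toward the LSB end of the word.
def bit_weights(n_bits, ratio_error=0.0):
    weights = []
    w = 1.0  # MSB weight, normalized
    for _ in range(n_bits):
        weights.append(w)
        w /= 2.0 * (1.0 + ratio_error)  # each 2:1 division slightly off
    return weights  # index 0 = MSB ... index n_bits-1 = LSB

ideal = bit_weights(16)
real = bit_weights(16, ratio_error=0.001)  # assume 0.1% error per division

# after 15 divisions the LSB weight is off by roughly 1.5%
lsb_error = (real[15] - ideal[15]) / ideal[15]
print(f"LSB weight error: {lsb_error * 100:.2f}%")  # about -1.49%
```

So a seemingly harmless 0.1% ratio error per stage grows to about 1.5% at the LSB, which is why the bit-weight ratios need such care.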
 

NorthSky

Major Contributor
Joined
Feb 28, 2016
Messages
4,998
Likes
942
Location
Canada West Coast/Vancouver Island/Victoria area
Me too. :)
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,877
Likes
16,652
Location
Monument, CO
Dynamic element matching is a way to improve linearity by better matching current sources (sinks, actually, much of the time). Other architecture choices, such as the number of unary and binary weighted cells and switching architecture, may be influenced by but are distinct from dynamic element matching (or trimming, or other schemes to improve element matching). I have a text at home that describes DEM pretty well, and have implemented it once or twice but the headroom requirements are severe in the original design (and still a challenge today).
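As a toy illustration of the idea (my own sketch, not Don's implementation, and using the data-weighted-averaging flavor of DEM rather than Philips' original scheme - element count and mismatch level are arbitrary assumptions): a small thermometer-coded DAC with mismatched unit elements, with and without rotating the element selection.

```python
import random

random.seed(0)
N = 15  # unit current sources for a 4-bit thermometer-coded DAC
# each element deviates from its ideal unit weight by up to +/-2% (assumed)
weights = [1.0 + random.uniform(-0.02, 0.02) for _ in range(N)]

def convert_static(code):
    # fixed selection: always elements 0..code-1, so mismatch maps directly to INL
    return sum(weights[:code])

ptr = 0  # rotation pointer for data-weighted averaging

def convert_dwa(code):
    # rotate through the array so every element is used equally often over time
    global ptr
    total = sum(weights[(ptr + i) % N] for i in range(code))
    ptr = (ptr + code) % N
    return total

REPS = 3000  # average many conversions of each code
static_avg = [sum(convert_static(c) for _ in range(REPS)) / REPS for c in range(N + 1)]
dwa_avg = [sum(convert_dwa(c) for _ in range(REPS)) / REPS for c in range(N + 1)]

def inl(avg):
    # residual nonlinearity after removing the overall gain error, in LSBs
    gain = avg[N] / N
    return max(abs(avg[c] - gain * c) for c in range(N + 1))

print(f"static INL: {inl(static_avg):.4f} LSB, with DWA: {inl(dwa_avg):.4f} LSB")
```

With a fixed selection, element mismatch shows up directly as static nonlinearity; rotating the selection spreads each code over all elements, so on average the mismatch turns into a benign gain error plus noise.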

Delta and delta-sigma converters were developed to take advantage of process advancements that favored high transistor count digital circuits over precision analog and enable circuits less sensitive to PVT (process, voltage, temperature) and other variations that impact analog circuits. Early single-bit delta-sigma designs suffered from issues with linearity (it is not true that a "1-bit" ADC or DAC has no high-precision requirements), noise, and tones with LF signals caused by their relatively simple filters and such. Those issues have largely been resolved, but there are some intrinsic pros and cons of any architecture.
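For anyone who hasn't seen one, a minimal first-order delta-sigma loop looks like this (my own sketch of the textbook structure, not any particular chip):

```python
import statistics

def dsm_first_order(samples):
    """First-order delta-sigma modulator: the input is differenced (delta)
    against the fed-back 1-bit output, accumulated (sigma), then quantized."""
    integ = 0.0  # integrator state
    fb = 0.0     # fed-back quantizer output
    bits = []
    for x in samples:   # input assumed to lie in [-1, 1]
        integ += x - fb                    # delta, then sigma
        fb = 1.0 if integ >= 0 else -1.0   # 1-bit quantizer
        bits.append(fb)
    return bits

# a DC input of 0.25 yields a +/-1 bitstream whose average converges on 0.25
bits = dsm_first_order([0.25] * 10_000)
print(statistics.mean(bits))
```

The precision burden moves from matched analog elements to the loop filter and the downstream decimation filter, which is exactly the trade the process trends favored.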

Which I am sure Opus111 will delve into without my babbling and interruptions, looking forward to it!
 

Phelonious Ponk

Addicted to Fun and Learning
Joined
Feb 26, 2016
Messages
859
Likes
215
This should be good. Looking forward to the challenges along the way, how they were addressed and to what result. I'll struggle to follow at times but it should be worth the effort.

Tim
 
OP
Opus111

Opus111

Addicted to Fun and Learning
Joined
Mar 2, 2016
Messages
666
Likes
38
Location
Zhejiang
Thanks Don for the input - I wasn't planning on going very deep into the technical issues regarding DEM, you're much better placed to do that as you've gotten your hands dirty with it (so to speak) whereas I'm just an end user and reader of the papers.
You've successfully written a slightly later part of my text about D-S (or S-D) converters there too, great :)
 

DonH56

Master Contributor
Technical Expert
Forum Donor
Joined
Mar 15, 2016
Messages
7,877
Likes
16,652
Location
Monument, CO
Thanks, no worries.

Aside: Gabor Temes, one of the fathers of delta-sigma converters, told me it was "delta-sigma". He used that terminology because there is first a difference (delta) block, then a summer (sigma) block, in the signal flow of the converter. He said an early paper (not his) got it backwards (a mis-translated Japanese paper, he showed it to me, with his hand-written correction back to them -- I was an IEEE JSSC reviewer at the time). Unfortunately it was oft-quoted and thus oft-repeated, wrongly. I've been in arguments with people who insist it is "sigma-delta" without really understanding why that's wrong. It is a don't-care for me as the context is clear but I use delta-sigma.
 

amirm

Founder/Admin
Staff Member
CFO (Chief Fun Officer)
Joined
Feb 13, 2016
Messages
44,585
Likes
239,396
Location
Seattle Area
For some reason, my eyes never latched on the reverse use of those two words! The operation is of course delta-sigma as you mention so strange that the reverse has caught on for something this technical.
 

Herbert

Addicted to Fun and Learning
Joined
Nov 26, 2018
Messages
528
Likes
435
There was a training handbook issued to German Sony engineers at the time the Sony CDP-101 was introduced.
I stumbled over it while looking for a way to add an SPDIF output to my CDP-101. No problem with 2nd-generation players,
but the CDP-101 seems to work differently. As one example, the master clock of 8.64 MHz and the bit clock of 2.16 MHz do not divide evenly into 44.1 kHz x 16 bits x 2 channels.
The bit clock also seems to invert and not run continuously, see photo. Does anyone know more?
Below are the signals that are fed into the CX20017 DAC.
The German text reads: "The following picture shows the timing diagram for:"
LRCK - Left/Right Clock (44.1 kHz)
WLCK - Word Clock (88.2 kHz)
BLCK - Bit Clock (2.16 MHz)
DIN - Serial Data
CC - Same signal as WLCK
LRCK Out, DCL and DCR were outputs to drive the buffer amps and the switching of the analog stage, since this mono DAC had to feed two channels.

Timing Diagram CX20107.jpg
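A quick check of the numbers quoted above (my arithmetic, not from the handbook) confirms the odd divisibility:

```python
# Herbert's observation, checked: the CDP-101's master clock (8.64 MHz) and
# bit clock (2.16 MHz) are not integer multiples of the stereo audio bit rate
master, bitclk = 8_640_000, 2_160_000
audio_bits = 44_100 * 16 * 2        # 1,411,200 audio bits per second
print(master / audio_bits)          # ~6.12, not an integer
print(bitclk / 44_100)              # ~48.98 bit-clock cycles per sample period
```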
 

AnalogSteph

Major Contributor
Joined
Nov 6, 2018
Messages
3,371
Likes
3,317
Location
.de
I'm afraid you're SOL when it comes to your mission (SPDIF out is not generally a thing pre-CXD1125Q or ~1986, at least 2 generations of chips later; aren't there enough other suitable candidates anyway?), but what a neat document! It's a complete primer on the CD system and a theory of operation for the CDP-101 in one. This arguably warrants uploading it to HFE.

The clock frequencies are related to the CD bitclock of 4.3218 MHz = 588 bits per frame with a frame clock of 7.350 kHz. Each of these so-called small frames contains six stereo samples. I would advise consulting the Wikipedia article on the CD data format.

Later designs thankfully started slaving the CD decoder to the audio clock rather than the raw bitclock. Philips again seems to have made the sensible choice first, going with a 4.2336 MHz XO for the SAA7000 right from the start (CD100, 1982). The CDP-101 was a bit of a madlad design in general. The 2nd-gen CDP-102/302ES (1984, CX23035 - CX23034 - CX20152) had its master xtal at the DAC and slaved everything from there.
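The frame arithmetic checks out (standard Red Book figures):

```python
# Red Book frame arithmetic: 588 channel bits per small frame at a 7.35 kHz
# frame rate gives the 4.3218 MHz EFM bit clock; 6 stereo samples per frame
# recovers the 44.1 kHz sample rate
frame_rate = 7_350         # small frames per second
bits_per_frame = 588       # channel bits per small frame
samples_per_frame = 6      # stereo samples per small frame
print(frame_rate * bits_per_frame)     # 4321800 -> 4.3218 MHz
print(frame_rate * samples_per_frame)  # 44100
```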
 

Herbert

Addicted to Fun and Learning
Joined
Nov 26, 2018
Messages
528
Likes
435
I'm afraid you're SOL when it comes to your mission (SPDIF out is not generally a thing pre-CXD1125Q or ~1986, at least 2 generations of chips later; aren't there enough other suitable candidates anyway?), but what a neat document! It's a complete primer on the CD system and a theory of operation for the CDP-101 in one. This arguably warrants uploading it to HFE.

The clock frequencies are related to the CD bitclock of 4.3218 MHz = 588 bits per frame with a frame clock of 7.350 kHz. Each of these so-called small frames contains six stereo samples. I would advise consulting the Wikipedia article on the CD data format.

Later designs thankfully started slaving the CD decoder to the audio clock rather than the raw bitclock. Philips again seems to have made the sensible choice first, going with a 4.2336 MHz XO for the SAA7000 right from the start (CD100, 1982). The CDP-101 was a bit of a madlad design in general. The 2nd-gen CDP-102/302ES (1984, CX23035 - CX23034 - CX20152) had its master xtal at the DAC and slaved everything from there.
The second generation CX23035 (which was used just one year later) already provides the possibility to add SPDIF which I already did: CX23035 SPDIF out
 

Gradius

Addicted to Fun and Learning
Joined
Aug 17, 2019
Messages
663
Likes
425
Location
Iquique, Chile
So sad, because among the CD's faults came that INFAMOUS 44.1 kHz (176.4 kB/s) rate, which then became the worldwide standard. It should have been 48 kHz (192 kB/s) from the start.

All because of the 9th Symphony. Yeah, I love that one, but if they wanted to fit it on a single CD, they should simply have put more data on the disc, for Christ's sake. At 48 kHz the CD's audio data capacity (aka Red Book) would have been around 832 MB too.

When Philips started work on their new audio format, known as the compact disc, many groups argued over what size it should be. Philips planned an 11.5 cm diameter disc while Sony proposed 10 cm. One bright chap insisted that a single CD ought to have the capacity to hold a complete performance of Beethoven's Ninth Symphony. Performances range from about 65 to 74 minutes, which requires a 12 cm diameter - the size of a CD.

Philips engineer Kees A. Schouhamer Immink writes that Philips was pushing for the CD to be close in size to the cassette tape, whereas Sony was pushing for a slightly larger 12 cm disc (that came later), partially because they knew Philips already had a factory capable of producing 11.5 cm CDs; if Sony could make 12 cm the industry standard, they would erase Philips' head start in manufacturing.

Eventually the maximum playing length was set at 74 minutes and 33 seconds, and in his grave Beethoven breathed a sigh of relief - modern consumers could now hear the whole Ninth Symphony in all its glory.

But there's a twist to this story. The real limit for CDs started at 72 minutes, not 74, since that was the maximum length of the U-matic videotapes used for audio masters.
 

Herbert

Addicted to Fun and Learning
Joined
Nov 26, 2018
Messages
528
Likes
435
So sad, because among the CD's faults came that INFAMOUS 44.1 kHz (176.4 kB/s) rate, which then became the worldwide standard. It should have been 48 kHz (192 kB/s) from the start.
Ah, you can hear up to 24kHz?
 

Scytales

Active Member
Forum Donor
Joined
Jan 17, 2020
Messages
139
Likes
201
Location
France
So sad, because among the CD's faults came that INFAMOUS 44.1 kHz (176.4 kB/s) rate, which then became the worldwide standard. It should have been 48 kHz (192 kB/s) from the start.
Luc Theunissen, in Introduction to Digital Audio (Sony Europe Service Center training text, Belgium, P/N S-790-093-01), wrote this explanation about the choice of 44.1 kHz sampling frequency (page 13) :

Use of 44.056/44.1 kHz :

Another important criterion for the selection of a sampling frequency lies in the fact that, to arrange the digital information in a video-like signal (as is done in all the PCM adapters which use a standard helical-scan video recorder as a storage medium), there must be a fixed relationship between the sampling frequency (fs), the horizontal video frequency (fh) and the vertical frequency (fv). For this reason, these frequencies must be derived from the same master clock by frequency division; in other words, fs and fh should have as low a least common multiple as possible.

In the NTSC system, fh = 15,734.2657... Hz (a non-integer value, due to the necessary relationship with the NTSC chroma and audio subcarriers), whereas in the European PAL or SECAM systems, fh = 15,625.0 Hz.

Calculations have shown that, for the NTSC system, a frequency of 44,055.944... Hz would come closest to this ideal, whereas for the PAL system, a frequency of 44,100 Hz was also quite feasible.

The difference between these two frequencies is only 0.1%, which is negligible for normal use (the difference translates as a pitch difference at playback, and 0.1% is entirely imperceptible).

As a consequence, 44.056 kHz has been adopted as the sampling frequency in the EIAJ standard for PCM adapters, while 44.1 kHz will be used by adapters for the CCIR system, as well as for the future Compact Digital Audio Disc.


Thus, it appears to me that the choice of 44.1 kHz sampling frequency has less to do with the CD Audio medium per se than with the mass storage devices available to record and store data at the inception of digital audio.
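Theunissen's two "ideal" frequencies fall straight out of the video geometry if you assume the usual PCM-adapter layout of 3 samples per active line and 245 (NTSC) / 294 (PAL) usable lines per field - those two parameters are my assumption, not stated in the quoted text:

```python
# PCM-adapter geometry (assumed: 3 samples per active line, 245 usable lines
# per NTSC field, 294 per PAL field) reproduces Theunissen's two frequencies
ntsc_fs = 245 * (60 / 1.001) * 3   # NTSC field rate is 60/1.001 Hz
pal_fs = 294 * 50 * 3              # PAL field rate is 50 Hz
print(round(ntsc_fs, 3), pal_fs)   # 44055.944 44100
```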
 

Scytales

Active Member
Forum Donor
Joined
Jan 17, 2020
Messages
139
Likes
201
Location
France
Later Soundstream PCM units even had a 50 kHz sampling frequency; the first prototype from 1976 started with 37.5 kHz:

Thanks.

Interesting read !

Sony (e.g. the early PCM 3324) and others (e.g. Mitsubishi X-80 and X-800) also built PCM recorders with a 50.4 kHz sampling rate, as explained in the document I linked above (page 15). These were stationary-head recorders, allowing the realization of multi-track machines. 50.4 kHz was chosen because, divided by a factor of 8/7, it produces 44.1 kHz, hence the possibility of direct dubbing between helical-scan and stationary-head recorders, says Theunissen.
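The 8/7 ratio is exact:

```python
# 50.4 kHz divided by 8/7 (i.e. multiplied by 7/8) lands exactly on 44.1 kHz
print(50_400 * 7 / 8)  # 44100.0
```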

According to Simon Barber, the author of the article, the creators of the Soundstream system looked down on the digital format of the CD. In my opinion, though, the way the author reports those words should be taken with a pinch of salt, because he seems unaware of the existence of Sony's and other Japanese manufacturers' 50 kHz sampling digital recorders (see note 23 in Barber's article). It seems to me that the choice of sampling rate in this early age of digital recording was much more a product of the electro-mechanical properties of the existing magnetic tape recorder technology than anything else. I believe Soundstream, 3M, Sony and the other players in the field made the same kind of machine because they had to work within the same technical constraints.
 

Herbert

Addicted to Fun and Learning
Joined
Nov 26, 2018
Messages
528
Likes
435
Also interesting that almost half a century ago rock musicians favoured analog because its flaws "enhanced" the sound...
 