rebbiputzmaker
Ok cool. I did that test and, from what I recall, I could hear the -90 dB test tone.
This is not the place to start a long discussion about what you can or cannot hear; I just want to point out there is a big difference between measuring using sine tones and playing actual music, which is very complex.

I have seen this comment quite often on other fora, but with no explanation that is convincingly scientifically valid for me, just the implication that "this must be obvious", which to somebody with no technical education it may well seem to be. But everything I learned told me that in a linear system tones are simply additive, so in fact testing with simple tones and testing with a mixture of multiple tones are surely exactly the same?
A few random thoughts...
I've not (or only rarely) used linearity plots like that; other tests show the same thing, just with a different plot as the result. As a way to show relative differences among DACs, I like them. Note 0.1 dB ~ 1%, so 1% deviation is not an unreasonable criterion, but only about 0.3 bits if you reference SNR based upon quantization noise. You could argue for almost anything, but a consistent reference is good enough for me, and I can always read the graphs for more info or apply my own metric. It's a relative comparison, so as long as the basis is the same it works for me.
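The dB-to-percent rule of thumb above is easy to sanity-check. A minimal sketch (the function name is mine, not from the post):

```python
import math

def db_to_ratio_error(db):
    """Fractional amplitude error corresponding to a dB deviation."""
    return 10 ** (db / 20) - 1

# 0.1 dB deviation is roughly a 1% amplitude error:
err = db_to_ratio_error(0.1)
print(f"0.1 dB -> {err * 100:.2f}% amplitude deviation")  # about 1.16%

# Each bit of resolution is worth about 6.02 dB (20*log10(2)):
db_per_bit = 20 * math.log10(2)
print(f"dB per bit: {db_per_bit:.2f}")
```

So a 0.1 dB linearity criterion corresponds to a roughly 1% amplitude error, and 6.02 dB is the usual "one bit" yardstick when relating dB figures back to converter resolution.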
It is probably worth noting that correctly wiring pin 1, and in fact just using XLRs (balanced), does not guarantee freedom from ground loops or common-mode problems. And transformers don't always exhibit great CMRR, plus they have plenty of other issues. Neither active nor passive differential designs are perfect; you're always making trades among compromises. Great and awful examples of either type (active or transformer) exist, natch.
The Rane note does not really address transformer vs. active differential circuits, or only barely, unless I missed it (possible; I only skimmed it). It does address the issues with certain schemes for converting between balanced (differential) and single-ended (unbalanced) using transformers or active circuits. In those cases the active circuit implementation is critical, and it is true that it rarely provides decent CMRR. In fact, that is one thing I have decried for years: the proliferation of various quasi-differential schemes marketed as having the same benefits as fully differential designs, including a number of consumer and pro audio components that do not implement a truly differential transmitter and receiver but throw in a TRS or XLR jack and call it "balanced". Even something as simple as a resistor on the "cold" leg, so the impedance looking in is roughly the same as on the "hot" leg, is marketed as balanced based on the "balanced" impedance, but in practice the CMRR is not much better than a single-ended connection (i.e. almost nil, often as low as 6 to 20 dB).
Sure, if the device is not linear there will be intermodulation products, but DACs in general are actually pretty linear, so do you have an explanation as to why tone testing is not valid for music, apart from the observation that music is obviously more complex? I realise it is not always appreciated that music is a sum of mixed tones, and I could be over-simplifying/wrong?
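The "tones are additive" claim is just superposition, and it can be demonstrated numerically: for a linear system, processing a mixture of tones gives exactly the sum of processing each tone alone. A toy sketch (the FIR filter here is my own stand-in for "a linear system", not any particular DAC):

```python
import numpy as np

fs = 48_000
t = np.arange(4096) / fs

def linear_system(x):
    """A toy linear system: a 3-tap FIR filter (gain and memory, no nonlinearity)."""
    return np.convolve(x, [0.25, 0.5, 0.25], mode="same")

tone1 = np.sin(2 * np.pi * 1_000 * t)
tone2 = 0.3 * np.sin(2 * np.pi * 3_700 * t)

# Superposition: the response to the mixture equals the sum of the responses.
mixed = linear_system(tone1 + tone2)
separate = linear_system(tone1) + linear_system(tone2)
print(np.max(np.abs(mixed - separate)))  # tiny (floating-point rounding only)
```

If the system were nonlinear (replace the filter with, say, `np.tanh`), the two results would differ, and the difference is exactly where intermodulation products come from.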
Under changing signal conditions the states of sigma-delta modulators are often undefined.
Isn't a sine wave a "changing signal condition"?
No, it's steady-state. It has to be for the analyzer to analyze it.
Isn't a swept sine wave a "changing signal condition"?
So you are concerned that a delta-sigma modulator may not be linear, in some unknown and unpredictable way, if the signal isn't steady-state?

Music is not steady-state. Under changing signal conditions the states of sigma-delta modulators are often undefined. Can you offer a repeatable, non-steady-state test methodology besides music?
Wouldn't 3 dB be 1/2 LSB? That sounds like a good figure of merit when we're more than 100 dB down from full scale.
Balanced connections do help, though, even if we only get 40 or 50 dB at 50 or 60 Hz. About the only place we want or need more is in measurement equipment, like the AP, or in medical electronics, where we're often measuring signals in the nanovolt range.
I've worked designing professional audio equipment for ~50 years and have used resistive balancing on outputs in many a product design. There is nothing to fault there. CMRR depends on having equal resistance/impedance on both legs of the balanced transmission link, with the burden falling on the input side, where the higher resistance/impedance lies. It does not depend on both sides of the transmitter having equal signal voltages; that only buys you higher output voltage capability.
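The point that CMRR hinges on impedance matching rather than equal signal swing can be sketched with a simple two-voltage-divider model (my own toy model, not a full treatment of cable or transformer effects): the common-mode voltage couples onto each leg through its source resistance, and only the mismatch between the legs becomes a differential error.

```python
import math

def cmrr_db(rs_hot, rs_cold, z_in):
    """CMRR of a balanced receiver with common-mode input impedance z_in per leg,
    driven through (possibly mismatched) source resistances rs_hot and rs_cold.
    Any divider mismatch converts common-mode voltage into differential error."""
    v_hot = z_in / (rs_hot + z_in)    # common-mode gain onto the hot leg
    v_cold = z_in / (rs_cold + z_in)  # common-mode gain onto the cold leg
    diff = abs(v_hot - v_cold)
    if diff == 0:
        return float("inf")           # perfectly matched: infinite CMRR in this model
    return 20 * math.log10(1 / diff)

# Perfectly matched source resistances -> infinite CMRR in this idealized model:
print(cmrr_db(100.0, 100.0, 20e3))

# A 1-ohm (1%) mismatch on one leg, into a 20 kOhm common-mode input impedance:
print(f"{cmrr_db(100.0, 101.0, 20e3):.1f} dB")  # roughly 86 dB
```

Note the signal voltages never appear in the formula; only the impedances do, which is why a resistively balanced (impedance-balanced) output can achieve the same CMRR as a fully symmetric driver.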
Music is not steady-state. Under changing signal conditions the states of sigma-delta modulators are often undefined. Can you offer a repeatable, non-steady-state test methodology besides music?
Isn't the question one of 'logic', though, rather than transistor-level simulations? I would have thought that it would be possible to simulate at a higher level than transistors in order to establish the principle of operation, in which case it would be very fast. You could simulate all the transients you want, use real music, etc., and establish pretty much what the DAC would be producing without having to resort to measurements or SPICE simulations?

Define "entirely". Most of it can be, at least at a basic level, from high-level sims to mixed-mode simulations with behavioral models to transistor level, but it's a big task and there are always things in the real world that are not in the sims. And despite the speed of modern computers, simulation time is still a problem; setting up multiple signals with long record lengths and hundreds of thousands of transistors or more is very time-consuming. Dealing with long time constants, multiple interacting loops, and signals with very low-frequency content often adds up to an impractical simulation space. In my world, running a behavioral model like an IBIS-AMI receiver can take an hour or two to get just a few microseconds of data, and that is to gain a tiny snippet of steady-state'ish information. Running a transistor-level simulation to capture the same time period is not practical (too long to run, even if you could get the simulator to converge, another nightmare with large circuits).
Plus, the sims are only as good as the models and the test conditions. I sim'd the snot out of the DS converters I was designing for months and still barely scratched the surface. A few seconds in the lab obviates most sims, but in the world of IC design, simulation is required to get as good as you can before spending a few million on mask sets. We will spend a year or more simulating everything we can in the design, and there can still be problems with the chips, leading to mask spins and so forth...
FWIWFM - Don
Humans recognize sounds as real or not based largely on the attack (it's an evolutionary thing to keep from being eaten by a mountain lion). Looking at the rise time of a square wave will at least tell a little more.
Isn't the question one of 'logic', though, rather than transistor-level simulations? I would have thought that it would be possible to simulate at a higher level than transistors in order to establish the principle of operation, in which case it would be very fast. You could simulate all the transients you want, use real music, etc., and establish pretty much what the DAC would be producing without having to resort to measurements or SPICE simulations?
I'm sure there are all kinds of real-world analogue issues to deal with, but the implication earlier from someone was that the actual 'algorithm' is suspect and that steady-state signals (even mixtures of tones) wouldn't reveal the problems. My expectation is that real music could be fed into the algorithm and the simulated analogue output (assuming an ideal filter) examined for absolute deviations from the near-perfectly reconstructed waveform, calculated with 64-bit arithmetic. If we fed in a whole CD's worth and found that the absolute instantaneous output never deviated more than steady-state tests suggested it should, we might feel less helpless about it. If it revealed 'nasties', then the heritage DAC people are right.

The logic you can simulate, yes, but even that is tricky when you have long filters and feedback that makes the results signal-dependent. Defining a good set of signals and an adequate range of test cases is non-trivial. But yes, the principle of operation is not too bad to simulate; that is usually just the starting point, though. The devil is in the details, and even at the logic (e.g. RTL) level the simulations can be very long and the output files huge. Simulating a few seconds of music, even at the logic level, can take hours, though I do not have much experience with that, and at my workplace the projects involve millions of gates, so probably more complex than a delta-sigma DAC. But you still have to model the delays and such to ensure timing closure, and those little "extras" add a lot of time to the simulations.
That said, the simulations of 1k to 1M points used in the DS DAC simulations I presented in the articles on WBF (and copied here, thanks Amir) only took from a few seconds to a few minutes so for basic analysis it's quick. The architectures were very simple, however, low-order modulators and I did not use long and fancy FIR filters like a real design would use, and that was Mathcad/Matlab so not even at the gate level but one level up.
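For a sense of what those quick low-order behavioral sims look like, here is a minimal first-order delta-sigma modulator in a few lines (my own sketch, far simpler than any production design with its high-order loops and long FIR decimators): integrate the error between the input and the 1-bit feedback, quantize the integrator state, and recover the signal by averaging the bitstream.

```python
import numpy as np

def first_order_ds(x):
    """First-order delta-sigma modulator: accumulate the error between the
    input and the (one-sample-delayed) 1-bit feedback, quantize to +/-1."""
    integ = 0.0
    out = np.empty_like(x)
    for i, sample in enumerate(x):
        integ += sample - (out[i - 1] if i else 0.0)
        out[i] = 1.0 if integ >= 0 else -1.0
    return out

fs = 1_000_000                        # heavily oversampled, as DS modulators are
t = np.arange(100_000) / fs
x = 0.5 * np.sin(2 * np.pi * 1_000 * t)
bits = first_order_ds(x)

# Crude decimation: a moving-average filter recovers the input from the bitstream.
kernel = np.ones(64) / 64
recovered = np.convolve(bits, kernel, mode="same")
rms_err = np.sqrt(np.mean((recovered - x) ** 2))
print(f"RMS error after averaging: {rms_err:.3f}")
```

Even this toy shows the basic behavior: the 1-bit output density tracks the input, and the quantization noise is pushed up in frequency where the averaging filter removes most of it. Probing state behavior under transients (bursts, steps, silence) is a matter of swapping in different test vectors for `x`.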
ADCs are trickier, since a lot of analog goes on in the loop and determining stability from an RTL simulation alone is difficult.
And of course RTL does not tell you how the analog input and output filters and buffers behave. That is why I asked for your definition of "entirely". In a six-month design effort, I probably spent a week or two on the basic "logic" simulations; the rest of the time was spent designing the actual circuits and simulating them (and their layouts) with more realistic "real-world" models and parasitics. But I may be wrong, especially for audio, since the rates are low enough that timing and settling are less of a concern? I tend to doubt it; my LF and audio design experience (even my RF data converters had to work down to DC, since IQ systems often require that) says that designing an 8- to 14-bit system at 1 to 10+ GS/s and a 200 kS/s, 24-bit converter share many of the same challenges. Bigger devices are needed for lower noise and better matching, long settling tails are hard to identify (and often are not simulated, like thermal tails), and things like flicker and popcorn noise are not always adequately modeled (if at all). Quasi-saturation and sub-threshold effects, leakage... the list of things that can and do go wrong at the transistor level is endless.
So at one level you can simulate the logic fairly quickly, but it can take a while to really wring it out, and at the end there is still often a big gap from that to how the circuit performs on the bench (or in a plane, or space...)
But, this has made me think it would be interesting to dig up some of my old work and revisit it to look for some of the things that the logic ought to show... Most of that was proprietary, alas, so I'd have to recreate a bunch of even the basic simulation setups. I'll probably just go flip on the stereo instead since it's Saturday afternoon and I have to work tomorrow.
Now I do question the idea that delta-sigma DACs are harsh because of non-harmonic distortion. What's the basis for that? What do you think is going on? There are certainly ways to detect non-harmonic spurious signals with various test signals.
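One standard way to expose non-harmonic products is a two-tone (or multitone) test: nonlinearity produces spurs at sum and difference frequencies that are not harmonics of either tone. A toy sketch, using a small polynomial as a stand-in for the device under test (the tone frequencies, levels, and nonlinearity coefficients are all my own illustrative choices):

```python
import numpy as np

fs = 48_000
n = 48_000                       # one second of data -> 1 Hz bin spacing, no leakage
t = np.arange(n) / fs
f1, f2 = 5_000, 6_000            # classic two-tone IMD test pair
x = 0.4 * np.sin(2 * np.pi * f1 * t) + 0.4 * np.sin(2 * np.pi * f2 * t)

# Stand-in for a slightly nonlinear DUT (2nd- and 3rd-order terms):
y = x + 0.02 * x**2 + 0.02 * x**3

spec = np.abs(np.fft.rfft(y)) / (n / 2)   # scaled so a full-scale tone reads 1.0

def level_db(f):
    """Level of the 1 Hz bin at frequency f, in dB relative to full scale."""
    return 20 * np.log10(spec[int(f)] + 1e-12)

# Second-order products land at f2-f1 and f2+f1; third-order at 2f1-f2 and 2f2-f1.
# None of these are harmonics of 5 or 6 kHz.
for f in (f2 - f1, f2 + f1, 2 * f1 - f2, 2 * f2 - f1):
    print(f"{f:6d} Hz: {level_db(f):7.1f} dBFS")
```

Swap the synthetic `y` for a captured loopback recording and the same FFT bookkeeping reveals whether a real DAC is producing non-harmonic spurs, and at what level.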