OP starts about halfway down, marked in red. This first section collects important insights gleaned from the discussion between the start and the morning of 6/7/23 - added for your convenience
As the title says - damping factor. Clarification 6/7/23: it now says output impedance, to better reflect the discussion and represent the issue, because damping factor is not the subject - it's only calculated from the output impedance, which is the actual measurement performed. Amplifier output impedance affects the sound of speakers, especially those with passive crossover networks, and very especially if the passive crossover networks are complex (e.g. 60 parts), or if the drivers are contained in exotic enclosures (e.g. multi-way with chambers ported into chambers before exiting the cabinet... the list could go on, but you get the idea). Because damping factor is most popularly associated with amplifier control over woofer cone movement, it is not the best term to use here. Yes, woofer control is a reason to test amplifier output impedance, but it's more often a secondary consideration, because most amplifiers will be within acceptable range. That isn't the case with crossover interaction - there, even lower output impedances can cause potentially very audible problems (again, there are many, and how they manifest varies by design {precisely why the output impedance measurement should be done})
(the following two edits are added from insights gained through discussion, as was the paragraph above)
Edit: More specifically, a plot of frequency vs. impedance.
Edit2: If the test is more complex than running a script, an alternate, easier version could be: when the frequency response from 20Hz to 20kHz is flat [into a speaker impedance of 4/8 ohms], possibly a single number will suffice. When it's not: a plot!
Single button: A plot? Why not?!
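For anyone wondering what's behind that button, here's a minimal sketch of one common way to get the plot (my own illustration, not necessarily how any reviewer does it; the voltage readings and load value are invented): measure the output voltage unloaded and again into a known load at each test frequency, and the sag gives you output impedance vs. frequency.

```python
# Hypothetical two-point method: estimate output impedance at each test
# frequency from the voltage sag when a known load is connected.
# All readings below are made-up numbers for illustration.

def output_impedance(v_unloaded, v_loaded, z_load):
    # Zout = Zload * (Vunloaded - Vloaded) / Vloaded  (voltage-divider math)
    return z_load * (v_unloaded - v_loaded) / v_loaded

# Pretend sweep: (frequency Hz, open-circuit Vrms, Vrms into an 8 ohm load)
sweep = [(20, 2.000, 1.995), (1000, 2.000, 1.996), (20000, 2.000, 1.990)]

for freq, v_open, v_load in sweep:
    z = output_impedance(v_open, v_load, 8.0)
    print(f"{freq} Hz: {z * 1000:.1f} milliohms")
```

Plot those points against frequency and you have exactly the curve Edit asks for.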
Edit3: (my own addition, another justification for the importance of the measurement): when output impedance is too high, passive crossover networks start misbehaving: crossover points shift, effectively undoing the tuning - leading to phase issues which cause, among other things, reduced clarity and lower-quality imaging. Even a half-ohm output impedance will cause audibly obvious and very measurable problems. That's with a damping factor in the 10-16 range with typical 8 ohm speakers (which may be why SET-ers like single drivers on open baffles..)
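To make the interaction concrete, here's a rough sketch (my own illustration, with an invented impedance curve): the amp's output impedance forms a voltage divider with the speaker's frequency-dependent impedance, so the level at the terminals deviates most where the impedance dips - ripple that tracks the impedance curve, which is what undoes the crossover tuning.

```python
import math

# Invented impedance curve for a typical 8 ohm two-way (ohms at each Hz).
speaker_z = {40: 20.0, 200: 4.0, 1000: 8.0, 3000: 25.0, 15000: 6.0}

def deviation_db(z_out, z_speaker):
    # Level at the speaker terminals relative to an ideal 0-ohm source.
    return 20 * math.log10(z_speaker / (z_speaker + z_out))

# Half an ohm of output impedance, as in the edit above:
for freq, z in sorted(speaker_z.items()):
    print(f"{freq} Hz: {deviation_db(0.5, z):+.2f} dB")
```

The spread between the roughly -1 dB deviation at the 4 ohm dip and the barely-touched 25 ohm peak is frequency-response ripple shaped by the speaker's impedance curve, not by the recording.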
With output impedance, lower is better. But lowering it past the point where damping factor reaches the low-to-mid hundreds introduces new design challenges with their own effects on the amplifier, not least of which is increased crossover distortion from using more than a pair of output transistors per channel.
Output impedance example: a 3m run of 10 gauge wire (typical hi-fi with $1-4K amps and speakers) has just under 0.01 ohm of resistance (we'll use 0.01 to cover the added resistance of the contacts). An amp with a DF of 400 into 8 ohm speakers has an output impedance of 0.02 ohms. Put them together and you're at 0.03 ohms total: the DF 400 amplifier has become 267 in reality. With DF 600: 342. With DF 800: 400! 342 - 267 = 75, and 400 - 342 = 58.
The subtraction shows the rapidly diminishing real-world returns on investing in high DF past the mid-hundreds (and this isn't even considering the drivers themselves, which see yet more series resistance from their crossover coils. Considering the wire gauge of most woofer crossover coils in non-top-end speakers, anything over 200 is likely overkill).
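The arithmetic above as a quick script (same setup as the example; any one-digit differences from the figures above are just decimal rounding):

```python
# Effective damping factor into 8 ohm speakers once ~0.01 ohm of
# cable-plus-contact resistance is added to the amp's output impedance.

def effective_df(df_amp, z_speaker=8.0, r_cable=0.01):
    z_out = z_speaker / df_amp  # amp's own output impedance
    return z_speaker / (z_out + r_cable)

for df in (400, 600, 800):
    print(f"DF {df} -> effective DF {effective_df(df):.0f}")
```

Run it for larger DF values and the gains keep shrinking, which is the diminishing-returns point.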
And now, back to your regularly scheduled programming:
"The OP!"
Most amps getting reviewed here today are solid state and usually of decent quality, so most meet the minimum damping factor for proper operation. Although it's easy enough to design an amp for a >40 damping factor, manufacturers don't necessarily make the right decisions (sadly, we see it happen often enough in other aspects of design [and once is too often]. Especially annoying is when the correct configuration wouldn't add to the cost and there's no way the engineers don't know the superior way...).
An example: say someone has a pair of speakers with exotic tuning - chambers ported to chambers, a 60-part crossover, 4 ohm nominal impedance - where the valley between the impedance peaks at 38Hz and 91Hz is the deepest trough: 1.7 ohms.
Not by much, though, because between 91 and 172Hz it's 1.9.
Especially in this case, the [high] current required must flow effortlessly. If it doesn't, there's no hope of also fighting the comparable current the drivers carelessly dump back at odd phase angles. Reactivity (having to account for it in amplifier choice) is the price paid for the more accurate bass these designs provide.
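A back-of-envelope on that current demand (my own numbers, purely resistive approximation - a real reactive load with a steep phase angle at the dip is harder still):

```python
# For the same terminal voltage, a 1.7 ohm trough draws nearly 5x the
# current of the 8 ohm nominal load. 20 Vrms is 50 W into 8 ohms.

def current_amps(v_rms, z_ohms):
    return v_rms / z_ohms

v = 20.0
print(f"8.0 ohms: {current_amps(v, 8.0):.1f} A")  # 2.5 A
print(f"1.7 ohms: {current_amps(v, 1.7):.1f} A")  # 11.8 A
```

If the amp (or its power supply) can't deliver that extra current cleanly, the dips are where it shows.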
Speakers more considerate of amplifiers, like those in sealed boxes, are less demanding, but they, too, have a point at which an amp with a higher output impedance begins affecting their sound.
For those who might not know, damping factor is the ratio of speaker impedance to amplifier output impedance. If you have a 10 ohm speaker and a 1 ohm output impedance, the damping factor is 10. And a 4 ohm speaker and 0.1 ohm output? 40!
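The same ratio in code form, using the two examples from the paragraph above:

```python
# Damping factor = speaker impedance / amplifier output impedance.

def damping_factor(z_speaker, z_out):
    return z_speaker / z_out

print(damping_factor(10, 1))   # 10.0
print(damping_factor(4, 0.1))  # 40.0
```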
The minimum value required for accurate sound varies. The damping factors of SET amps and some other tube designs often fall short of 40. They still sound good, warm; it's a part of their charm - like their holographic imaging.
I think damping factor is a useful metric to test.
If not in all amps, then at least in the more interesting designs, like class A.