"Rated power bandwidth" was the specification. It's a tricky term because it's not "rated power" as such but the bandwidth over which the amplifier can deliver its rated power, down to the -3 dB (half-power) points. It was part of the FTC amplifier rule, IIRC.
@DonH56 may comment on this.
Disclaimer: I was cruising along and commented without noticing this was the Benchmark thread. All my comments should be treated as non-specific to the AHB2 or any specific amplifier.
Not really, been too many years, and that spec (among others) seems to have fallen out of vogue. Back when the world was young and John and I were little boys tearing apart our parents' TV sets (at least I was -- first color TV, had it in pieces on day two to see where the color came from; my posts today are testament that I was able to reassemble it, it worked, and I lived to tell the tale), full-power bandwidth became a Big Deal. I think it was around the late 1970s, maybe into the early 1990s, when ultrawide bandwidth (speaking of Citation) and vanishingly low distortion became all the rage. The full-power BW spec I don't see much today. AFAIK it is still in the FTC spec, but the rules were relaxed at the request of manufacturers making multi-channel receivers and amplifiers for home theater and such, since not all channels needed full power (specs), or so they said. The result is the mess today where a multichannel AVR rated for 100 W/ch only delivers about 30 W/ch when all channels are driven (I have one of those, alas, a Sony Elite, which I did not expect to drop off so much).
Wandering aside, John is correct (of course) that it was a measure of the bandwidth of an amplifier measured relative to full rated power at 1 kHz (I think). A number of new SS "superamps" claimed DC to hundreds of kHz or more bandwidth (which never seemed real desirable to me but that's another discussion). Tube amps tended to roll off at LF and HF due to the transformer and other parasitics as well as design choices so the new SS amps could show their performance advantage by touting much greater full power bandwidth. But of course even SS was not a panacea; LF response was generally better, though the debate rages to this day how much response below 10 or 20 Hz really matters, and for some HF response was much better and for others equal or worse. And of course some wideband amps would oscillate and destroy themselves with the right load (speaker).
The test was simple: set the amp to deliver rated power at 1 kHz (typically, and into a resistive test load), then sweep the frequency high and low to find the -3 dB points. That defined your full-power bandwidth. Only power mattered; I don't think there was a distortion spec, but it has been ages and I'm too lazy to find my old copy of the FTC (or IHF) spec. Distortion at those limits could be "interesting", so of course you rarely found distortion specs tightly coupled to full-power bandwidth, or they were specified at a much higher limit. You'd see bandwidth specified at 1 W or whatever to give really wide frequency numbers, full-power distortion at 1 kHz or over a more limited frequency range (e.g. 20 Hz - 20 kHz no matter the full-power bandwidth) to provide some nice low numbers (say 20 Hz to 20 kHz at 0.05%), and then the full-power bandwidth at some higher distortion limit (say 5 Hz to 100 kHz at less than 1% THD). Essentially manufacturers could "skirt" the rules by printing a whole bunch of specifications to fill the datasheet but with almost nil correlation among them. You (I, John, etc.) had to bench-test the amp to see what it would really do.
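Since the procedure is really just a threshold search over a sweep, here's a minimal Python sketch of the idea. The amplifier response here is entirely made up (100 W rated, single-pole roll-offs at 8 Hz and 60 kHz) -- none of these numbers come from any real amp or from the FTC rule itself:

```python
# Hypothetical sketch: FTC-style full-power bandwidth -- drive the amp at
# rated power (referenced to 1 kHz), sweep frequency, and find the outermost
# points where output power falls to -3 dB (half power).

def full_power_bandwidth(sweep, p_rated):
    """sweep: list of (frequency_Hz, power_W) at full rated drive.
    Returns the lowest and highest frequencies where power stays >= half rated."""
    in_band = [f for f, p in sweep if p >= p_rated / 2.0]
    return min(in_band), max(in_band)

def toy_power(f, p0=100.0, f_lo=8.0, f_hi=60e3):
    """Made-up amplifier: 100 W with single-pole roll-offs at 8 Hz and 60 kHz."""
    return p0 / ((1 + (f_lo / f) ** 2) * (1 + (f / f_hi) ** 2))

freqs = [10 ** (i / 20) for i in range(121)]  # 1 Hz .. 1 MHz, log spaced
f_low, f_high = full_power_bandwidth([(f, toy_power(f)) for f in freqs], 100.0)
print(f"full-power bandwidth: {f_low:.1f} Hz to {f_high / 1e3:.0f} kHz")
```

On the bench you'd do the same thing with a signal generator and a power meter instead of a toy formula, and (as noted) keep an eye on distortion at the extremes, which the number itself hides.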
Damping factor is simply load impedance divided by amplifier output impedance. It is a single number and (I remain convinced) used mainly because you can generate big numbers for marketing, and bigger is better, so it helps sell amps. And yes, there were plenty of arguments that amps with >1000 DF (e.g. Phase Linear) didn't matter in the real world because as soon as you added speaker cables the effective DF the speaker saw was reduced by an order of magnitude or more (still waiting for superconducting speaker cables). And virtually nobody (then or now) commented on the wires inside the speaker, the impedance of the crossover, or the tiny little wires in the voice coils, let alone how phase influenced it. Providing damping factor as a single number does not tell you much about how a particular speaker's sound is affected except "higher is better"; how high depends upon the speakers, cables, etc. And few manufacturers these days provide damping factor over frequency -- some provide a few points, like 20 Hz, 1 kHz, and 20 kHz, and far fewer provide an actual curve. Some provide output impedance, but often at a single frequency (or without specifying the frequency). And so it goes.
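To make the cable argument concrete, a quick back-of-envelope in Python. All the numbers here are mine and purely illustrative (an amp with 8 milliohm Zout and 0.1 ohm of round-trip cable resistance), not from any datasheet:

```python
# Back-of-envelope sketch: damping factor is load impedance over source
# impedance, and the cable resistance quietly becomes part of the source
# impedance the speaker actually sees.

def damping_factor(z_load, z_source):
    return z_load / z_source

z_speaker = 8.0  # ohms, nominal load
z_amp = 0.008    # ohms amplifier output impedance -> DF of 1000 at the terminals
r_cable = 0.1    # ohms round trip, a few meters of ordinary zip cord (assumed)

df_at_amp = damping_factor(z_speaker, z_amp)
df_at_speaker = damping_factor(z_speaker, z_amp + r_cable)
print(f"DF at amp terminals: {df_at_amp:.0f}")    # ~1000
print(f"DF at the speaker:   {df_at_speaker:.0f}")  # ~74
```

So the marketing number of 1000 collapses to well under 100 at the speaker terminals, which is the order-of-magnitude drop mentioned above -- and that's before the crossover and voice-coil wire take their share.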
While some designs exhibit reduced damping factor (higher output impedance) at low frequency, most (especially SS) amplifiers have high damping (low output impedance) at LF, falling as the feedback factor is reduced due to feedback loop bandwidth. You can put a whole bunch of devices in parallel and use fat wires in the amp but to get really low output impedance requires feedback. Feedback senses the output and forces it to track the input, so can make an amplifier look like it has near-zero output impedance -- assuming the feedback factor (loop gain and bandwidth) is high enough. It is hard to get wide bandwidth with high gain and stay stable, plus it's expensive, so feedback factor is one of those design trades designers make. Including "none", as seems to be in vogue these days in some circles, alas.
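The feedback mechanism can be sketched with the usual idealized relation Zout_closed = Zout_open / (1 + loop gain), where the loop gain rolls off with frequency. This is a single-pole toy model with numbers I picked for illustration (0.5 ohm open-loop Zout, loop gain of 1000 with a 1 kHz pole), not any particular amplifier:

```python
# Idealized sketch: feedback divides the open-loop output impedance by
# (1 + loop gain). Loop gain falls with frequency, so closed-loop Zout
# rises -- and damping factor falls -- toward high frequencies.

def zout_closed(f, zout_open=0.5, t0=1000.0, f_pole=1e3):
    """|Zout| with a single-pole loop gain T(f) = t0 / (1 + j*f/f_pole)."""
    loop_gain = t0 / (1 + 1j * f / f_pole)
    return abs(zout_open / (1 + loop_gain))

for f in (20.0, 1e3, 20e3):
    z = zout_closed(f)
    print(f"{f:8.0f} Hz: Zout ~ {z * 1e3:6.2f} milliohm, DF into 8 ohm ~ {8 / z:6.0f}")
```

With these numbers the model shows Zout roughly twenty times higher at 20 kHz than at 20 Hz, which is the "falling as the feedback factor is reduced" behavior in miniature; a zero-feedback design simply keeps the open-loop value everywhere.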
I try not to make definitive statements about how output impedance (Zout) affects the sound because, as mentioned before, it very much depends upon the whole system. An amp with higher Zout placed right next to the speaker with short cables may cause smaller frequency response variations than an amp with much lower Zout (much higher damping factor) placed twenty feet (what, ~6 m?) away and driving longer cables. And of course the speaker's impedance matters; one with a relatively flat impedance curve is less sensitive to amplifier Zout. Then add preference; people may prefer the sound of certain speakers with certain amps because they like the resulting frequency response. Nothing at all wrong with that, but try explaining why their choice is subjectively (and measurably) worse and it's a cat fight. Best to sip the wine, enjoy the music, and nod at the comments about how good it sounds. And chances are it really does sound great.
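For anyone who wants numbers behind the "it depends on the whole system" point: the speaker forms a voltage divider with Zout plus the cable, so a speaker whose impedance swings over frequency sees a response ripple that grows with total source impedance. These are hypothetical figures I chose for illustration (a multiway speaker dipping to 3.2 ohms and peaking at 30 ohms):

```python
# Hypothetical sketch: response ripple from the voltage divider formed by
# the amplifier's Zout (plus cable) and a frequency-dependent speaker load.
import math

def deviation_db(z_min, z_max, z_source):
    """Peak-to-peak response variation (dB) across an impedance swing."""
    hi = z_max / (z_max + z_source)
    lo = z_min / (z_min + z_source)
    return 20 * math.log10(hi / lo)

ripple_low_z = deviation_db(3.2, 30.0, 0.05)   # low-Zout amp, short cables
ripple_high_z = deviation_db(3.2, 30.0, 2.0)   # high-Zout amp or long, thin cables
print(f"{ripple_low_z:.2f} dB vs {ripple_high_z:.2f} dB")
```

With these assumptions the low-impedance case stays near 0.1 dB while the high-impedance case is several dB -- clearly audible, and exactly the kind of response shaping some listeners end up preferring.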
FWIWFM - Don