Since I can no longer edit the post, and there's a page break between Ken's bug report and my original post, I guess I'll put a fixed copy here.

Originally Posted By: bridgman
BTW this is getting terribly off topic, but I do have to say that the bigger amp really made a difference in my enjoyment of the system. The speakers are in a large room (~4200 ft^3 for the immediate listening area), and that space opens directly into at least another 5000 ft^3.

BTW, is there a well-understood technical term for the point at which a power amp starts clipping (i.e. continuous power + headroom)? There used to be a lot of talk about "peak power" and I *think* that meant "power at clipping", but IIRC there were a few different potential interpretations depending on whether or not you factored in rail voltage droop and how hard the amp had been working before the peak came along.
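
For what it's worth, the relationship I'm picturing is just the continuous rating scaled up by the headroom in dB. A rough Python sketch is below; the headroom figures are made-up examples for illustration, not specs for any real amp, and it deliberately ignores rail droop and thermal history:

```python
def power_at_clipping(continuous_watts, headroom_db):
    """Continuous rated power plus headroom, expressed in watts.

    headroom_db is the short-term margin above the continuous rating
    before the amp clips (ignores rail droop and thermal history).
    """
    return continuous_watts * 10 ** (headroom_db / 10)

# e.g. a 65 W amp with an assumed 1 dB of dynamic headroom
print(power_at_clipping(65, 1.0))   # ~81.8 W
# vs. an assumed 3 dB of headroom
print(power_at_clipping(65, 3.0))   # ~129.7 W
```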

The reason I'm asking is that I'm wondering how much of the improvement I would have seen with a high-quality ~100W power amp. It's probably fair to assume that such an amp would have more relative headroom than the amp in my receiver, so the effective power increase would be more than the 65W-to-100W change suggests, but I'm not sure -- and I'm trying to talk myself out of picking up a used 100W amp to test with.
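
Putting rough numbers on that: the rating change alone is under 2 dB, but a headroom difference could widen the gap at the actual clipping point. Quick Python sanity check -- the 0.5 dB and 2 dB headroom figures are pure assumptions for illustration, not measurements of either amp:

```python
import math

def db_gain(p_new, p_old):
    """Power ratio expressed in decibels."""
    return 10 * math.log10(p_new / p_old)

# Raw rating change: 65 W -> 100 W
print(round(db_gain(100, 65), 2))   # ~1.87 dB

# If the receiver amp has (say) 0.5 dB of headroom and the separate
# amp has 2 dB, compare effective clipping points instead:
receiver_peak = 65 * 10 ** (0.5 / 10)   # ~72.9 W
separate_peak = 100 * 10 ** (2.0 / 10)  # ~158.5 W
print(round(db_gain(separate_peak, receiver_peak), 2))  # ~3.37 dB
```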

I don't really have time to get into amp-swapping at this point, and Schmidt's law still applies ("if you mess with a thing long enough it'll break").

That info seems like something that would be useful for people thinking about upgrading, but I'm not sure whether it's practical to spec amps that way or whether the dependency on the shape of the signal is too much to work around. I guess some kind of standard test waveform would be needed, and that's 20 years lost in a standards committee right there.
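
IIRC the old IHF dynamic headroom test used something along those lines -- a short 1 kHz tone burst repeated at intervals over a quieter carrier. Here's a sketch of generating that kind of stimulus in Python; the 20 ms / 500 ms timing and -20 dB rest level are from memory, so treat them as assumptions rather than the official spec:

```python
import numpy as np

def tone_burst(freq=1000.0, fs=48000, burst_ms=20, period_ms=500,
               rest_db=-20.0, n_periods=4):
    """1 kHz tone that jumps to full level for burst_ms out of every
    period_ms, sitting rest_db below full level the rest of the time.
    Roughly the shape of the old IHF dynamic-headroom stimulus
    (timing/levels here are assumptions, not the official spec)."""
    t = np.arange(int(fs * period_ms / 1000 * n_periods)) / fs
    tone = np.sin(2 * np.pi * freq * t)
    in_burst = (t % (period_ms / 1000)) < (burst_ms / 1000)
    envelope = np.where(in_burst, 1.0, 10 ** (rest_db / 20))
    return tone * envelope
```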


M60ti, VP180, QS8, M2ti, EP500, PC-Plus 20-39
M5HP, M40ti, Sierra-1
LFR1100 active, ADA1500-4 and -8