Originally Posted By bridgman
Ian reminded me that I had already encountered a case where the optimum room treatment changed a bit depending on the speaker I was using (I believe that was with the M60s and replacing the couch), so I'm going to revisit the blind from a few posts ago (the one that made the Sierras sound dead and lifeless) and see if I can get similar results without tone controls when time permits.


Finally got a chance to try the M3s with the blind pulled down (the change that mucked up the Sierra-1 imaging). As Ian hinted, it had exactly the opposite effect with the M3s: it cleaned up the imaging and significantly improved the overall sound. My first guess is that the wider dispersion of the M3s meant more midrange energy was bouncing off the glass... and since the other side of the listening area was open space (and a sofa), the result was unbalanced sound until I pulled the blind down.

This leads to the slightly scary observation that A/B speaker testing in an asymmetrical listening area can be unfair to one of the speaker sets if the A and B speakers have significantly different response/dispersion patterns... and even a perfectly symmetrical room may end up requiring slightly different room treatments for best results with each speaker.

This may help to explain why reviewers often seem to end up preferring their reference speakers to most of what they test - if the room treatments are optimized for their reference speakers then anything significantly different is going to be tested with sub-optimal room treatments.

To further complicate things, I was up near Axiom before the holidays and Ian invited me to stop in and do some double-blind testing with the Sierra-1s. That was eye-opening to the extent that I'm still trying to get my head around it.

We did a few rounds of tests, and in the second round I was ABSOLUTELY sure that Debbie had switched in completely different speakers. From the sound I was thinking maybe LFRs and Mini-Ts (bass response and imaging both seemed significantly different from the first round), but in fact they were the same speakers as before (M3s and Sierras) with the positions physically reversed, the volume turned up a bit higher (I did that because I felt the "new speakers" handled the volume better), and the ventilator noise cut down by stuffing a rag in the vent.

I was totally surprised when the curtain was pulled back.

The third round was equally interesting - this time Ian was working the controls, and he picked a different set of tracks than I had. As the volume increased the M3s seemed to open up while the Sierra-1s started to sound "hard"... and the relative frequency response differences between the speakers seemed smaller. Ian noted that drivers can start exhibiting compression at surprisingly low sound levels, so managing the compression characteristics of (for example) woofer vs. tweeter is one more important part of maintaining good sound at different levels.
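To put rough numbers on that idea, here's a minimal sketch (Python, with made-up compression figures rather than measurements of any real driver) of how unequal compression between woofer and tweeter would tilt a speaker's tonal balance as the volume goes up - which could make two speakers sound more alike, or less alike, at higher playback levels.

```python
# Toy illustration of driver compression vs. level (all numbers are made up,
# not measurements of any real speaker). If the woofer compresses more than
# the tweeter as the drive level rises, the tonal balance tilts toward the
# treble at higher volumes.

def compressed_output(drive_db, comp_db_per_10db, onset_db=85.0):
    """Output level after simple linear compression above an onset level."""
    excess = max(0.0, drive_db - onset_db)
    return drive_db - comp_db_per_10db * (excess / 10.0)

for drive in (85, 95, 105):
    woofer = compressed_output(drive, comp_db_per_10db=2.0)   # assumed: woofer loses 2 dB per 10 dB above onset
    tweeter = compressed_output(drive, comp_db_per_10db=0.5)  # assumed: tweeter loses 0.5 dB per 10 dB above onset
    print(f"{drive} dB drive: woofer {woofer:.1f} dB, tweeter {tweeter:.1f} dB, "
          f"bass-to-treble shift {woofer - tweeter:+.1f} dB")
```

Even a couple of dB of relative shift is easily audible as a change in tonal balance, so two speakers whose drivers compress differently can converge (or diverge) in character as the volume goes up.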

It's hard to summarize everything in a couple of lines but it's clear that choice of music and volume level make much more difference when comparing speakers than I had previously thought. I was less surprised by the impact of moving the speakers around simply because I had done a lot of that kind of testing already. I was also surprised by how much "less different" (and how good) the two sets of speakers sounded in Axiom's listening room compared to mine - possibly because their room is relatively symmetrical while mine is off-center with room treatments on one side balancing open space on the other.

Anyways, it was very interesting and educational - thanks to both Ian and Debbie.

M60ti, VP180, QS8, M2ti, EP500, PC-Plus 20-39
M5HP, M40ti, Sierra-1
LFR1100 active, ADA1500-4 and -8