Chesseroo,

I have only one graduate course in statistical analysis; however, I do know that the standards at the NRC were quite rigorous, and I suspect that Dr. Floyd Toole could mount a strong defense.

I would point out that these tests typically involved four or five panel members, with listening repeated in 25-minute sessions over the course of (usually) three days when we were reviewing four different speakers. This was done so that every panel member auditioned each speaker from five different chairs in the room, and each speaker was placed in four different locations, to randomize out the effects of room and speaker-location vagaries as well as seating position.
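The rotation described above amounts to a Latin-square counterbalancing design: every speaker cycles through every position, and every listener through every seat, so position and seating effects average out. Here is a minimal sketch of such a schedule; the function names, the modular-rotation rule, and the session counts are my own illustration, not the NRC's actual protocol:

```python
speakers = ["A", "B", "C", "D"]
POSITIONS = 4   # assumed: one floor position per speaker under test
SEATS = 5       # assumed: one chair per panel member

def speaker_schedule(n_sessions=POSITIONS):
    # Latin-square rotation: in session s, speaker i sits at position (i + s) % POSITIONS,
    # so across POSITIONS sessions each speaker occupies every position exactly once.
    return [
        {spk: (i + s) % POSITIONS for i, spk in enumerate(speakers)}
        for s in range(n_sessions)
    ]

def seat_schedule(n_listeners=5, n_sessions=SEATS):
    # Same idea for the panel: each listener rotates through all SEATS chairs.
    return [
        {f"listener {l + 1}": (l + s) % SEATS for l in range(n_listeners)}
        for s in range(n_sessions)
    ]
```

With four sessions per day over three days, a schedule like this lets every listener hear every speaker from every chair, which is the point of the counterbalancing.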

Note also that these tests were first done in mono with single speakers, which very quickly let us isolate glitches and non-linearities. When the tests were repeated in stereo, the individual rankings never changed. The absolute scores went up even for bad speakers, but the relative rankings remained the same.

With excellent speakers, the "comparably good" verdict was common: it became impossible to rank one good speaker ahead of another, because slight personal preferences would change with each track of music.

Home listening tests are rarely performed in mono with a single speaker, and even if you do the test blind in stereo, I believe biases shift because of the shift in soundstage that occurs with stereo tests. It's beyond the capability of most enthusiasts to set up turntables that position stereo pairs in identical spots in the room, which is the only fair way to do it. Turning one pair off, then getting up, moving the other pair into the same spot, and repeating the test simply doesn't cut it.

Regards,


Alan Lofft,
Axiom Resident Expert (Retired)