Detecting minimal differences is a common goal of all sorts of perceptual experiments...and that's what I was referring to. That's a very different issue from "perception of sound quality" - or it could be. The first is well defined; the second could mean quite a few things.

I'm literally talking about tens of thousands of studies. I wasn't making a specific point about a specific finding; I'm talking about the general state of the procedures of an entire field of research. I mean, I could have given:

http://www.perceptionweb.com/

or a dozen other journals and said "start at volume 1", but that's not very helpful.

More to the point of the thread, if Harley thinks one or more of the standardized procedures is faulty, that's fine. Design a blind test that addresses that. But keep it blind. It's simply silly to suggest you need to know what equipment you're listening to.
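For what it's worth, the statistics behind scoring such a test are simple. Here's a minimal sketch of how an ABX-style result is typically evaluated: count how often the listener correctly identifies the hidden stimulus, then ask how likely that score is by pure guessing. The function name, the 16-trial example, and the 0.05 threshold are my own illustrative choices, not anything from a specific standardized procedure:

```python
from math import comb

def binomial_p(correct: int, trials: int) -> float:
    """One-sided p-value: the probability of scoring at least
    `correct` out of `trials` by chance alone (p = 0.5 per trial,
    i.e., the listener is guessing)."""
    return sum(comb(trials, k) for k in range(correct, trials + 1)) / 2 ** trials

# Illustrative numbers: 12 correct out of 16 blind trials.
p = binomial_p(12, 16)
print(f"p = {p:.4f}")          # ~0.038: unlikely to be guessing
print(f"p = {binomial_p(9, 16):.4f}")  # ~0.40: consistent with guessing
```

The point is that the math doesn't care whether the listener knows the brand name; it only needs the identity of X hidden on each trial, which is exactly what "keep it blind" means.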

By no means am I suggesting that ALL equipment reviews MUST include stringent DBT procedures. I'm more than happy to read the opinions of experienced listeners with a good track record, especially for things that are likely to have large, noticeable differences (e.g., different loudspeakers).

But if someone tells me that cables are "danceable", but blind testing repeatedly shows no differences in cables meeting certain minimum specs, I'll trust the blind test, thank you very much.