Echoing Bren's comments... it seems that Harley's conclusion is: 'if the test is to determine whether an error is audible, and given that the error exists, then if no error is detected, the test must be faulty.'

It seems that this totally misses the point. If the purpose is to detect the anomaly, then perhaps double-blind testing isn't the way to go. But if the purpose is to determine whether the anomaly is detectable, then double-blind testing may be an appropriate means of doing so.

The purpose of the test, it seemed to me, was to determine whether the compressed codec proposed for use would be detectably different (to the audiophile, let alone the common man) from the original, uncompressed sound. The test determined that the error was undetectable - i.e., good enough for broadcast standards.

If the test were aimed at challenging audiophiles to find and locate an error, then perhaps it would make sense to tell them which sound was compromised and permit them an opportunity to first listen to the unadulterated signal, then to the error-laden one, and try to tell where the differences lie.

The first test is a practical one, for use in the real world, whereas the second test seems academic, at best.

I think that Harley has really drunk the Kool-Aid on this one. His example to "prove" the case - that monoblocks, tube amps, and a $99 receiver all sound the same in a double-blind test - does nothing more than reveal HIS obvious bias.