Fredk,

The science behind the double-blind listening test is a very interesting one. I was at the NRC in the early days, when we spent much time developing the double-blind listening test and analyzing the results. It was at this time that the consistency of the results started to become apparent. We were getting very similar rankings of products, even across a broad spectrum of listeners with very different musical tastes and levels of exposure to high fidelity. It was this consistency of results that allowed us to begin developing a set of laboratory measurements that could be directly correlated with the listening test results. It was also shown that this correlation would be lost if the tests were done sighted, especially amongst those with high levels of exposure to high fidelity who knew what was under test and had a predetermined favorite.

As time went by, the double-blind listening test became more refined, along with our knowledge of what we could do in the laboratory to positively affect the result. In the beginning we were essentially “knocking off the big measurements,” which meant the performance differences were large and the listening test results incredibly consistent; but as time went on and we wanted to work on more subtle differences, both the double-blind listening test and the laboratory measurements became more refined. We can now get very consistent listening test results on some very minor adjustments to the performance.

If I had to boil it down to a few key components needed for a proper test, and this is leaving out a lot of important detail, I think I would go with: 1) the dB levels of all the speakers under test must be the same; 2) the screen must be acoustically transparent; and 3) the listener must own the switch and the music. All of our listening tests are done one participant at a time, and the listener has full control over the music they wish to play and over when they wish to push the instantaneous switcher.
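Point 1 above, level matching, is the one that is easy to express numerically: if two speakers play back at even slightly different loudness, listeners reliably prefer the louder one, so the candidates must be gain-matched before any comparison. As a minimal sketch of the idea (the function names and the noise stand-in below are illustrative, not the NRC's or Axiom's actual procedure), one can measure the RMS level of each playback chain and compute the decibel gain that equalizes them:

```python
import numpy as np

def rms(signal):
    """Root-mean-square amplitude of a signal."""
    return np.sqrt(np.mean(np.square(signal)))

def matching_gain_db(reference, candidate):
    """Gain in dB to apply to `candidate` so its RMS level matches `reference`."""
    return 20.0 * np.log10(rms(reference) / rms(candidate))

def apply_gain_db(signal, gain_db):
    """Scale a signal by a gain expressed in decibels."""
    return signal * (10.0 ** (gain_db / 20.0))

# Illustrative stand-in: one second of noise at 48 kHz; the candidate
# chain happens to play back twice as loud (about +6 dB hotter).
rng = np.random.default_rng(0)
reference = rng.standard_normal(48000)
candidate = 2.0 * reference

gain = matching_gain_db(reference, candidate)  # roughly -6 dB
matched = apply_gain_db(candidate, gain)       # now level-matched to reference
```

In a real test the measurement would of course be done acoustically at the listening position with a calibrated microphone, not on the digital signal, but the arithmetic of equalizing levels in dB is the same.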

Obviously, doing it this way is a laborious process. It can take all day, or longer, for just one listening test session. I am sure Steven Roode is painfully aware of this. When we were developing the VP180, he would check in on the progress from time to time, and no doubt it must have appeared to be taking us forever to get through the listening tests of the various options we were working on. If we wanted to try just one more thing, then another week would go by.

The advantage, however, is that we do get consistent results from our listening tests, even on very minor adjustments made in the laboratory. We also get consistent results between experienced listeners and relative newbies. The experienced listeners tend to reach their decisions much more quickly than the newbies, but the results are quite consistent. We never put any sort of time limit on how long a listener spends in the test session.


Ian Colquhoun
President & Chief Engineer