Blind Listening Tests
Ian Colquhoun and Andrew Welker are in the Axiom Blind Listening Test room for the first in a series of videos - maybe four or five - on the topic of blind listening tests and their importance to acoustic research. We'll also focus particularly on when it is important that a test is done blind as opposed to sighted.
Ian: I thought I'd start with just a bit of a history of how I was introduced to double-blind listening tests. It dates back to the early 80s when I first arrived at the National Research Council (NRC). They had a room set up there for the purpose of conducting double-blind research testing. They'd been doing it for quite a few years and gathering data on questions like
"Do people score sound the same way or does everybody have their own personal taste that is all over the map?"
Really, I think prior to that research at the NRC, most people did think that there were different flavours for different people: some of you may remember that there was the so-called West Coast Sound and East Coast Sound and British Sound and so on.
But really the research determined that that's not how sound works. People are overwhelmingly in agreement about how things sound: what's better, what's worse. Even people who swear they can't tell the difference in sound actually can tell the difference, down to some fairly minute detail.
And then you have the 'golden ears' - the people who make a career out of listening to things. They, along with musicians, are great people to have in these tests, because they can detail exactly what they're hearing and write it down, separating in great detail which aspects they liked or didn't like (or absolutely hated). So a little bit of training or experience with instruments, vocals, and sounds like that is of big benefit in these double-blind tests.
It was interesting - when I was first introduced to the double-blind test it was a real eye-opener. Prior to that we used to just build a product and listen to it - we never thought about doing it blind. But there is no question in my mind that there is visual bias, and you cannot get around it. This comes down even to the point where we take measurements in the anechoic chamber and then come to do listening tests in this room: if I know which is 'A' and which is 'B' - which speaker or which selection on the switch is which curve - there is absolutely no way that I can say "Okay, I'm going to put that aside for now and say that it doesn't matter." It does matter.
This was also proven at the NRC: if you took the blind screen away and did exactly the same test, the results varied enormously depending on whether the person thought 'the big one should be better' or 'that brand should be better'. So blind testing is very important if you're going to use the results for scientific research.
I'm not suggesting anyone would want to listen blind themselves, but for the purpose of what we're doing it's a very important tool. Andrew has a lot of experience with double-blind testing as well, from the beginning of his career in the 90s at Audio Products International.
Andrew: I've worked with blind listening tests for almost my entire time in this industry, so when I joined Axiom five years ago it was a very familiar part of what I was used to. I think it's very important to remember that we use blind listening tests as a tool. It's a tool that is part of the day-to-day work we do in loudspeaker design - really, in any design. The reason is that we can't completely correlate all the measurements we take with what the speaker is going to sound like.
Over the years you get an idea of exactly what measurements, or what parts of measurements, matter and are going to have a real impact on perceived sound quality. At the end of the day we can make all kinds of measurements, but until you sit down and listen and bring that subjective aspect to the design, there's no way to tell whether those measurements mean you've created a better product.
When we do blind listening tests here, only rarely will we bring a competitor's product in to see how we stack up against a similar model at a similar price in the marketplace. Most of the blind listening that goes on here at Axiom involves comparing either a brand new model, or a new series version, against our existing lineup.
Let's take an M80 for example: that product has existed since 1999 and it's been steadily improved with different versions - the Ti, v2, v3 and now we're at v4 - and the only way we can gauge the improvements we are trying to make with a new version is by doing a blind listening test. Otherwise how do we qualify that the new version actually performs better and sounds better than the current version? Obviously that's the only reason for making the version changes: we've learned something more about correlating the measurements with what we hear, and the blind listening test is confirmation of that.
Ian: When you're talking about something like the family of curves, it can be a collection of over 300 measurements depending on the speaker. We developed specific algorithms that we use to average these curves out. Our algorithms keep getting better and better with every year that goes by, and this creates a version change. We're up to version 4 in our current line. But the only way to prove that the algorithm is getting better is to subject the algorithm to a double-blind test.
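To make the idea concrete, here is a minimal sketch of what averaging a family of curves might look like. The function names, the use of power-domain averaging, and the weights are all illustrative assumptions, not Axiom's actual algorithm:

```python
# Hypothetical sketch: reduce several frequency-response curves (dB SPL
# values at shared frequency points) to one weighted summary curve.
# Averaging is done on power (pressure squared), then converted back to
# dB, rather than averaging the dB values directly.
import math

def average_curves(curves, weights=None):
    """curves: list of equal-length lists of dB values.
    weights: optional per-curve weights (default: equal weighting).
    Returns the weighted power-average of the curves, in dB."""
    n = len(curves)
    if weights is None:
        weights = [1.0 / n] * n
    total = [0.0] * len(curves[0])
    for curve, w in zip(curves, weights):
        for i, db in enumerate(curve):
            total[i] += w * 10 ** (db / 10)   # dB -> power, weighted sum
    return [10 * math.log10(p) for p in total]  # power -> dB

# Example: two of the hundreds of measured curves, at three frequencies
on_axis  = [90.0, 88.0, 85.0]   # dB SPL
off_axis = [89.0, 85.0, 80.0]
summary = average_curves([on_axis, off_axis])
```

In a real family of curves there would be hundreds of such measurements at many angles, and the weighting itself is the part of the algorithm that evolves from version to version.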
In our listening room there are speakers, and you pull an acoustically transparent cloth across them to do the listening from the seats across the room. If we're looking at something just in the family of curves, there will only be one speaker on each side - a right and a left - behind the screen, and we will change only the measured response of the speaker, not the speaker itself.
That's great because it avoids something called position error: if you're instead comparing two actual pairs of speakers, you have to do the test twice. First you do it in the starting position, then you switch the speakers so each pair is in the opposite position, redo the test, and average the data to get rid of the position error.
We tried a couple of times to create a switcher to actually move the speaker, but so far we haven't been able to do it fast enough, because you want almost instantaneous switching between A and B - it's much easier for people to be detailed in their critique if you do that. So we're still working on it. For now we simply redo the test and average out the two results.
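The averaging step described above can be sketched in a few lines. The data layout and the 0-10 preference scale here are assumptions for illustration, not the actual scoring sheet used in these tests:

```python
# Minimal sketch of correcting for position error: run the same blind
# comparison twice with the speaker pairs swapped between positions,
# then average each speaker's scores across both runs, cancelling out
# any advantage of one physical position in the room.

def position_corrected_scores(run1, run2):
    """run1/run2: dicts mapping speaker label -> list of listener
    scores, with the speakers physically swapped between the runs.
    Returns the mean score per speaker over both positions."""
    result = {}
    for speaker in run1:
        scores = run1[speaker] + run2[speaker]
        result[speaker] = sum(scores) / len(scores)
    return result

# Speaker "A" scores higher while in the favoured position...
run1 = {"A": [8.0, 7.5], "B": [6.5, 7.0]}
# ...and lower after the swap; averaging removes the position bias.
run2 = {"A": [7.0, 6.5], "B": [7.5, 7.0]}
scores = position_corrected_scores(run1, run2)
```

With instantaneous A/B switching of a single pair (as in the family-of-curves tests), the position is identical for both conditions and this correction isn't needed.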
So I think that's enough to start on the subject of blind listening tests. We'll follow this up with a number of other videos, and if there are questions or comments we'll try to address them there. To summarize: it's a very important engineering tool. Whatever test you can do sighted, you can also do blind - but it is an engineering tool, not necessarily something you want to do at home. It's complicated to set up and hard to do accurately at proper levels and so on.
In our next videos, we'll cover the nuts and bolts of how to set up and perform a double-blind test, as well as distortion and why distortion testing is done with test tones in the chamber. Any other things you'd like to see covered?