An Axiom Experimental Study by Alan Lofft, Ian Colquhoun and Tom Cumberland
In the process of finding the amplifiers, loudspeakers, and subwoofers to bring the pleasures of high fidelity music into our homes, we encounter lists of technical specifications that include distortion measurements. Distortion is considered to be any unwanted noise or deformation of the audio signal that the device amplifying and reproducing the music might cause. The culprits include any equipment in the reproduction chain through which the audio signal passes on its way to becoming sound in your living room and reaching your ears. Working backwards, this includes loudspeakers and subwoofers, amplifiers (transistor or tube), mastering recorders, preamplifiers, digital signal processing (DSP) chips, analog and digital recorders, mixing boards, and microphones. It's a large subject, but for these tests we focused on potential loudspeaker and subwoofer distortions and their audibility with music playback, using pure tones as a noise test signal.
Debbie Swinton analyzing the audio configuration during one of our listening sessions.
Lists and technical numbers purporting to describe the distortion a device introduces, like the commonly stated percentage of Total Harmonic Distortion plus Noise (0.03% THD + N), seem fairly abstract on the page. We can scan the percentages or look at graphs of relative distortion versus an amplifier or loudspeaker's output at various listening levels, but it's hard to imagine what those figures represent under real-world listening conditions. Will we hear it with music? How much distortion can we tolerate or even detect? Are some types of distortion more audible than others? How much distortion does it take (and at what frequencies) before it intrudes and spoils the listening experience? Are there benefits or penalties to reducing distortion levels that are already inaudible? For example, in the 1970s, one unintended consequence of the rush to reduce distortion in early solid-state amplifiers by adding large amounts of negative feedback to the circuit (which reduced amplifier distortion to vanishingly low numbers like 0.001% THD) was a type of distortion called Transient Intermodulation Distortion (TIM), first uncovered by Finnish researcher Matti Otala. On sudden musical transients, TIM could excite a solid-state amplifier using large amounts of negative feedback into oscillation, resulting in an audible and nasty high-frequency distortion. Consequently, it's worthwhile viewing distortion numbers with caution when making a purchasing decision: ever-lower distortion numbers may not contribute to improved sound quality.
In real-life situations, we're all too familiar with distortion as extraneous, unwanted noise: a cell phone ringing or people coughing during a concert, for example, or the background rumble of air-conditioning or air-circulation equipment in an auditorium during quiet passages of a recital.
The history of recorded and reproduced sound has always been one of improving recording and playback equipment to reduce distortion and eliminate extraneous noise and artifacts that distract us from musical enjoyment. Any noise that isn't part of the original music signal must be viewed as a distortion, including noise added by the analog recording medium itself. The LP era had its groove swish plus the irritating pops and clicks of vinyl-pressing imperfections and accumulated dust. Even the original master tapes fed to record-cutting lathes contained high-frequency hiss, still audible on many CD reissues of early tape masters that haven't been processed through noise-reduction devices (in removing the noise, these devices may introduce other audible artifacts that degrade the signal!). Looking back, many of us played and enjoyed vinyl discs for decades because they were the only practical, decent-sounding, and inexpensive recording and playback medium available. In retrospect, however, it's amazing how much noise and distortion we tolerated along with our music.
For all but a few, the introduction of digital recording and playback systems banished the most annoying distortions of analog disc recording and playback: groove noise, tape hiss, ticks and pops, slow and fast speed irregularities (wow and flutter, respectively), turntable rumble, tracking distortion, and the sometimes severe dynamic limitations of phono cartridge playback. The latter imposed a type of dynamic compression when things got too loud and/or contained too much bass for the grooves of a vinyl disc to hold. (Except for a few direct-to-disc recordings, virtually everything recorded onto vinyl was processed through a dynamic limiter or compressor; otherwise no phono cartridge could play the grooves.)
Loudspeakers and subwoofers are analog devices, but the last three decades of loudspeaker design have seen gradual improvements in reducing potential distortion. Still, the lingering question remains: when does distortion become audibly significant?
This is what Axiom set out to determine, and to do so in a controlled experimental environment using blind and non-blind listening tests with a group of listeners ranging in age from 22 to 60. These tests were a more formal, scientifically controlled version of tests Axiom did on the same subject decades ago.
Anecdotal reports and conventional wisdom have suggested that distortion may become a serious problem at levels below 1%. Indeed, some reviewers and listeners have claimed to detect audible glitches at a fraction of this value. While previous researchers using sine-wave test signals and headphone playback have reported detection thresholds well below the 1% level, research using actual program material (music) and loudspeaker playback indicates that the masking effects of music may conceal distortion until it rises well above the 1% level, at which point our ears begin to detect its presence. The effects of perceptual masking of other musical frequencies have been well documented: when a quiet musical sound (a harp, for example) is close in frequency to a louder musical signal (a trombone section, say), we simply don't hear the harp, because the trombones' powerful sound masks its delicate tones. This phenomenon forms the basis of perceptual coding systems like Dolby Digital, dts, MP3, Windows Media, and other data-reduction schemes, which dramatically reduce the amount of data storage required for complex music signals and multi-channel soundtracks on DVDs and other digital media. Such data-reduction schemes or algorithms are called lossy because large amounts of data are discarded, which frees up additional space for audio and video storage on the disc. Other recording systems such as CD, SACD, and DVD-Audio are lossless because no data are discarded.
Distortion and the masking effects of music are especially important with loudspeaker and subwoofer playback, since these two devices are the actual producers of sound in the listening room. And because they use reciprocating motor assemblies, the mechanical excursion of voice coils, diaphragms, cones and domes makes them more susceptible to unwanted movement and other distortions in their mission of pressurizing and moving air molecules to create audible sound waves.
The Test Procedure
Two selections of rock/pop music of limited dynamic range, typical of modern-day rock recordings, were selected from Phil Collins's Hits and Barenaked Ladies' Gordon albums. The limited dynamic range of +/- 5 dB allowed us to introduce the noise without having to follow the music level up and down throughout the song. The musical selections on CD were played in stereo on a pair of wide-range loudspeakers (Axiom M80ti's) operating with an Axiom EP600 subwoofer. Pure sine-wave tones at fixed frequencies were played over a third M80 speaker and a second EP600 subwoofer (all speakers and subwoofers were concealed behind an acoustically transparent but visually opaque curtain), at gradually increasing loudness levels until the listener detected their presence and signaled by a raised hand that something in the music didn't seem right. Bryston and Yamaha Pro amplifiers were used in the playback chain.
The first group of eight listeners, tested at an average playback level of 92 dB SPL (+/- 3 dB), ranged in age from 20 to 60 and included a mix of male and female auditioners chosen from Axiom staff as well as a few guests who happened to be visiting the Axiom plant in Northern Ontario. All reported normal hearing acuity. Each listener participated in individual listening sessions that typically lasted about 20 to 25 minutes. The tests were then repeated with an additional eight listeners at an average level of 89 dB SPL, and a further group of eight was tested at 86 dB SPL. Interestingly, the results demonstrated that the average amplitude level of the music playback did not affect the noise detection results. At each test level, the ability of listeners to detect the noise distortion depended on the relative loudness of the noise signal to the music, not on the overall average loudness of the music.
The noise or distortion test signal consisted of pure tones at fixed frequencies of 20, 40, 80, 120, 160, 200, 240 Hz, and so on up to a high-frequency limit of 10 kHz. The test signals were chosen to simulate what loudspeakers would do under normal operating conditions. The pure tones were played over the third loudspeaker and subwoofer at gradually increasing loudness along with the music from the stereo speakers and subwoofer. The test tone would be left on for two or three seconds, then off for a similar period, gradually increasing in loudness until the listener detected the noise and signaled by raising his or her hand. The tone would then be reduced in level until the listener lowered his or her hand, indicating that the tone could no longer be heard. This was repeated to verify the noise level at which the listener detected the distortion. Data were recorded for each listener and each test frequency for both of the musical selections and plotted on a graph. Figure 1 shows the final average result for all 24 listeners.
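The procedure described above is essentially an up/down (method-of-limits) threshold search repeated at each test frequency. The sketch below is a hypothetical simplification, not Axiom's actual test rig: the step size, starting level, and the assumption that the 40 Hz spacing continues all the way to 10 kHz are illustrative guesses, and the "listener" is simulated by a callback.

```python
def test_frequencies(limit_hz=10_000):
    """Tone series from the article: 20 Hz, 40 Hz, then 80 Hz upward.
    The article lists 40 Hz steps through 240 Hz; we assume (hypothetically)
    that spacing continues up to the 10 kHz limit."""
    freqs = [20, 40]
    f = 80
    while f <= limit_hz:
        freqs.append(f)
        f += 40
    return freqs


def detection_threshold(listener_hears, start_db=-60.0, step_db=2.0):
    """Raise the tone (in dB relative to the music) until the listener
    signals, then lower it until they no longer hear it, and take the
    midpoint of the two turnaround levels as the detection threshold."""
    level = start_db
    while not listener_hears(level):   # ascending run: raise hand
        level += step_db
    heard_at = level
    while listener_hears(level):       # descending run: lower hand
        level -= step_db
    lost_at = level + step_db
    return (heard_at + lost_at) / 2.0
```

A simulated listener with a fixed threshold, e.g. `detection_threshold(lambda db: db >= -30.0)`, returns that threshold; in the real tests the ascending/descending pair was repeated per frequency to verify the reading.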
Initially, individual listeners were told to raise their hand only if they heard something in the music that didn't seem right. As the tests proceeded, alternate listeners were told the exact nature of the tests: that we were going to introduce a series of pure tones along with the music at increasingly louder levels. We also specified the exact frequencies and the order of the noise signals, i.e., 20 Hz, 40 Hz, and so on. Remarkably, the results were the same whether or not the listeners were told the test procedure in advance. Tests were then repeated beginning at 10 kHz and moving down in frequency to a lower limit of 20 Hz. Again, the results were the same whether we started at the top and moved down, or began at 20 Hz and moved up to 10 kHz.
The tests were repeated with music at average listening levels of 86, 89, and 92 dB SPL, measured at the listening seat with a professionally calibrated sound pressure meter. Subjectively, these levels ranged from normal to quite loud, and all the listeners were comfortable with them. In the graph in Figure 1, the horizontal line at 0 dB represents the average level of the music; a typical dynamic range of +5 to -5 dB was measured during playback of the musical selections. The small squares on each curve represent the specific frequency points of the sine-wave distortion fed into the listening room along with the music. As can be seen in Figure 2, the results with the different musical selections track each other very closely, indicating that the particular music did not significantly change the results.
Figure 1 shows the combined average of all results together with a sloping trend line representing the test subjects' average ability to detect the distortion at lower levels as the frequency increases. The graph in Figure 3 documents the individual detection curves for each of the eight listeners at the 92-dB average listening level. The congruity is remarkable: only one obvious deviation, at 10 kHz for the oldest listener in the tests, shows any significant departure from the other listeners' curves. Even at much higher frequencies (5 kHz, for example), the distortion tone had to be raised to an average of 30 dB below the music level (about 3% distortion) before listeners could hear it along with the music.
While it has been recognized for years that human hearing is not very sensitive to low bass frequencies, which must be reproduced with much more power and intensity in order to be heard, these results show that our detection threshold for noise (made up of harmonically related and non-harmonically related test tones) is practically non-existent at low frequencies. (The test tones are noise in the sense that they are not musically related to tones commonly found in musical instruments.) In fact, the tones at 20 Hz and 40 Hz had to be increased to levels louder than the music itself before we even noticed them. Put another way, our ability to hear the noise tones at frequencies of 40 Hz and below is extremely crude; indeed, the results show we are virtually deaf to distortion at those frequencies. Even in the mid-bass at 280 Hz and lower, the noise can be around -14 dB (20% distortion), about half as loud as the music itself, before we hear it.
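The dB-below-music figures quoted throughout (-14 dB as 20%, -30 dB as about 3%, levels matching the music as 100%) all follow from the standard amplitude-ratio conversion, percent = 100 × 10^(dB/20). A minimal sketch:

```python
def db_to_percent(rel_db):
    """Convert a tone's level relative to the music (in dB) to a
    distortion percentage of the music's amplitude:
    0 dB -> 100%, -20 dB -> 10%, -40 dB -> 1%."""
    return 100.0 * 10.0 ** (rel_db / 20.0)
```

For instance, `db_to_percent(-14)` gives roughly 20%, matching the mid-bass threshold the article reports, and the 1% figure audible only at 8 kHz and above corresponds to a tone 40 dB below the music.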
Axiom's tests of a wide range of male and female listeners of various ages with normal hearing showed that low-frequency distortion from a subwoofer or wide-range speaker playing music is undetectable until it reaches gross levels approaching or exceeding the music playback level. Only in the midrange does our hearing threshold for distortion detection become more acute. For distortion of less than 10% to be detected, the test frequencies had to be greater than 500 Hz. At 40 Hz, listeners accepted 100% distortion before they complained, and the noise tones had to reach 8,000 Hz and above before 1% distortion became audible: such is the masking effect of music. Anecdotal reports of listeners' ability to hear low-frequency distortion with music programming are unsupported by the Axiom tests, at least until the distortion meets or exceeds the actual music playback level. These results indicate that the "where" of distortion (at what frequency it occurs) is at least as important as the "how much" (its overall level). For the designer, this presents an interesting paradox to beware of: lowering overall distortion at the price of shifting it to higher frequencies may actually make the distortion more audible.
Next episode: The effects of harmonic distortion
The tests done in this experiment are essentially noise tests; things such as mechanical resonances and port noises that are not harmonically related to a specific fundamental contained in the music would be examples of noise distortion. Other types of distortion such as Harmonic Distortion and Intermodulation Distortion have a direct relationship to a frequency being reproduced as part of the music. These types of distortion may be harder to detect than straight noise distortion; a subject for a future round of experiments.