Originally Posted By: audiosavant
DVD-Audio has now become a generic term in professional circles. DVD-A, the actual format...
Just a semantic thing, but the disc format was actually called DVD-Audio; most people shorten it to DVD-A. If you're talking about sound recorded on a DVD, then you should say "DVD audio": no hyphen, no capital A.

 Quote:
Correct, but that sound started life as an analog waveform. And was captured and pre-amplified in the analog domain then converted via (hopefully) high quality DACs before becoming digital.
So why put it on tape, with its inherent weaknesses, before handing it to the ADC (analog-to-digital converter)? (DACs go the other way.)

 Quote:
And that's what makes the most difference in digital recording, the quality of the conversion that turns those wonderful little continuous analog waveforms into chopped up evil little ones and zeros.

Bit depth and sample rate then come into play. And dither. And jitter. And stable clocking and... a myriad of other factors that combine to "recreate" analog sound events.
We're in agreement: it does take a high bit depth and sampling rate to accurately represent the original air pressure levels. But I believe that 96 kHz, and definitely 192 kHz, at 24 bits is enough for playback. That's playback; the recording side needs a little more, as I'll describe.

Tape doesn't have infinite resolution, not in the least. A 16-bit sample means the original voltage level at that slice of time is assigned to one of 65,536 different levels, from silence to really freaking loud. I don't think magnetic tape even has that level of precision, and if it does, it doesn't have the accuracy to always store volume level X as magnetic impulse Y. Maybe tape can pull off an SNR of 96 dB. But 24-bit gets you 16,777,216 different sample levels and a 146 dB signal-to-noise ratio. There's no way to do that in the analog domain. I don't even think the best ADCs (some of which are now 32-bit: 4 billion levels, about 194 dB) can accurately quantify that many levels; thermal noise alone will contribute more than the smallest steps.

Oh, and dither is only applied when changing bit depth; it's not part of a normal capture/mix process, unless you're using tools which can't handle the bit depth of capture (then you need to upgrade). Dither should be applied when converting your final mix from 24-bit down to 16-bit.
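To put numbers on that, here's a quick sketch of my own (not from any audio tool) using the standard ideal-quantizer formula, 6.02·N + 1.76 dB for a full-scale sine. Note the often-quoted 96 dB figure for 16-bit simply drops the 1.76 dB sine term, and the ideal 32-bit figure comes out near 194 dB:

```python
# Quantization step count and theoretical dynamic range per bit depth.
# SNR of an ideal N-bit quantizer for a full-scale sine: 6.02*N + 1.76 dB.

def quant_levels(bits):
    """Number of discrete amplitude levels available at a given bit depth."""
    return 2 ** bits

def ideal_snr_db(bits):
    """Theoretical SNR (dB) of an ideal quantizer with a full-scale sine input."""
    return 6.02 * bits + 1.76

for bits in (16, 24, 32):
    print(f"{bits}-bit: {quant_levels(bits):,} levels, ~{ideal_snr_db(bits):.0f} dB")
```

That reproduces the 65,536 and 16,777,216 level counts above; real converters fall short of the ideal figures because of analog-stage noise.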

As for sampling rate, one needs to capture and work at a rate twice that of the target mix. For CD/DVD (44.1 and 48 kHz) that would be 96 kHz; for Blu-ray, which is still mostly 48 kHz but can go as high as 192 kHz, that may mean up to 384 kHz. All very doable in a studio.

Most people cite frequency response and the Nyquist theorem when dealing with sampling rate, stating that 48 kHz is enough to reproduce up to 24 kHz, which is above the human hearing threshold. But more samples per second also determines how much detail is preserved in the sample alignment. If recording at 96 kHz and mixing down to 48 kHz, where does half of the samples go? They're averaged into their neighbors. Now imagine a 20 kHz sine wave. Nyquist says it should be reproducible at a sampling rate of 40 kHz. But in fact it can only be stored as a triangle wave, and then only when its peaks align exactly with the samples. Shift it 90 degrees out of phase, and the up, down, up, down samples of the sine's peaks and troughs collapse into a mid-level DC signal.

An anti-aliasing low-pass filter should be applied before down-sampling to a lower rate, to remove the high-frequency content which would only be encoded as noise anyway. But a 192 kHz final mix, run through a 24 dB/octave low-pass filter tuned to 24 kHz and then down-sampled to 96 kHz, will have excellent sample alignment and audible detail all the way out to 30 kHz.
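Here's a tiny stdlib-only demonstration (my own sketch) of that edge case: a 20 kHz sine sampled at exactly 40 kHz. Sampled at its peaks you get an alternating full-scale pattern; shifted 90 degrees, every sample lands on a zero crossing and the tone vanishes:

```python
import math

fs = 40_000  # sampling rate, exactly twice the tone frequency (the Nyquist limit)
f = 20_000   # 20 kHz sine

for phase_deg in (90, 0):  # 90 = sampling at the peaks, 0 = at the zero crossings
    phase = math.radians(phase_deg)
    samples = [math.sin(2 * math.pi * f * k / fs + phase) for k in range(6)]
    print(phase_deg, [round(s, 3) for s in samples])
```

The peak-aligned case alternates between roughly +1 and -1; the zero-crossing case is all (numerically near) zero. This is why critical sampling at exactly fs/2 is fragile, and why capturing with headroom above the target rate, then filtering and down-sampling, is the safer recipe.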

I got a little off track there, but my point is that frequency response correlates directly with a medium's ability to track any waveform accurately. Since 16-track, 2" tape running at 30 IPS rolls off above 10 kHz at about 12 dB/octave, that can be taken to mean its magnetic flux can change at a rate of about 40 kHz. The designers who worked on CD technology were no dummies (the engineers who only use the top two bits by mixing for loudness are). CDs do have the resolution of master tape, but need to be mastered at twice that rate when working digitally.

Oh, in case you were wondering: I was a double major in Mathematics and Computer Science. I've studied recording engineering, and one of my friends owns a small recording studio. I've written digital signal processing routines to handle both 2D and 3D data sets (think audio and pictures).

 Quote:
Only high quality converters can even get close to capturing and storing analog properly. This is the "crux of the biscuit" in modern recording today.
But high quality converters can be had: $3k buys 16 channels of 24-bit/192 kHz ADC/DACs. Add to that a good master clock for $1,500. That's not out of the reach of anyone making money doing this stuff.

 Quote:
Almost all quality digital processing are emulations of analog gear. And mostly vintage gear at that. Plug-ins that recreate analog and the inherent anomalies/distortions/saturations etc. are what's happening in recording currently.

Why is that if digital is perfect? Because analog is what the human ear wants to hear. Not just a higher/lower frequency range without tape hiss and no playback degradation. Resolution is all. Digital is getting better with high resolution audio, but remember, analog is infinite resolution. Digital still has to catch up to the primitive "quality" of 1960's recording techniques done on two channel tape by Rudy Van Gelder using just two microphones and a tape deck!
This is where we disagree. Maybe you're so used to hearing distortion, harmonics, and hiss that you find them pleasing. But when I listen to anything live, they're not there. Why should they be in a recording of live instruments? I want my recordings to be as pristine as possible. That's what sounds natural to me.

 Quote:
You are talking about the consumer end, I'm talking DSD professional two channel (thus far, multichannel DSD is very expensive and not really available) at 64 fs or 2.8 MHz (same as SACD), and 128 fs or 5.6 MHz (professional archiving). DSD can be printed and saved as DSDIFF, DSF or WSD files. PCM audio is 44.1 or 48 kHz at 16/24-bit; also 88.2, 96, 176.4 and 192 kHz at 24-bit. While my daw does 32 bit (and now) 64 bit, it still has to be delivered to the consumer somehow. And yes Blu-Ray is what I'm placing my hopes and fears on as a consumer playback format. And when it comes down between DTS encoding or Dolby encoding, I pick DTS. But YMMV. DSD is being used (experimented with?) right now as a future proof archiving format.
I'm talking about DSD in general. Its single bit, indicating only whether the analog waveform is headed up or down, makes working with it impossible. Sony, not wanting to lose face, found one place it works well. You said it: DSD is a great archival format for analog tapes. Its slew-rate limitation (it can't go from zero to max in one sample, but neither can anything analog), and the fact that it can't be processed (no EQing, no filtering, no nothing but time alignment between channels), make it perfect for capturing something which is already in the analog domain and isn't going to change anymore.
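For anyone curious what "a single bit indicating whether the waveform is headed up or down" looks like in practice, here's a toy first-order delta-sigma modulator of my own (real DSD runs at 2.8 MHz with higher-order noise shaping; this only illustrates the principle). The density of +1 bits in the output stream tracks the input level:

```python
def delta_sigma_1bit(samples):
    """Toy first-order delta-sigma modulator: one output bit per input sample.

    The integrator accumulates the error between the input and the fed-back
    bit stream; each output bit is just the sign of that running error.
    """
    integrator = 0.0
    bits = []
    for x in samples:            # inputs assumed in [-1.0, 1.0]
        bit = 1 if integrator >= 0 else -1
        bits.append(bit)
        integrator += x - bit    # feed the quantization error back
    return bits

bits = delta_sigma_1bit([0.5] * 1000)   # a DC input at half of full scale
print(sum(bits) / len(bits))            # bit density tracks the input: ~0.5
```

Notice there's no per-sample amplitude to EQ or filter directly; any processing means decimating to PCM first, which is exactly the limitation described above.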

 Quote:
Whew! I dig your style man, you do have a vast knowledge that I have come to respect on this format, but you have just made my point (that most big budget films, unless CGI/animation, are shot on film) for me.

Let's break this down, shall we?

[films snipped]
Well, you got me there. All big budget films have CG effects; that's where the budget goes. But there are plenty of smaller indie films being shot digitally too.

How about this: Peter Jackson shot The Lord of the Rings on film, and in the process of adding the CG and effects built the second-largest effects house in the world (Weta, just behind ILM). So one might assume he likes the look of film; he used it again when shooting King Kong. But upon seeing output from a prototype by the new camera maker Red, he took two of them, prototypes which only had Rec/Stop functions, and shot a 15-minute short called Crossing the Line with Neill Blomkamp. That's why District 9 then used the production version of the same camera.

Go watch Che, maybe that'll be more in line with what you were wanting to see. It was shot with the same cameras as District 9.

Oh, I agree Public Enemies looked awful. It's not a good example of a digitally shot film, but that was the director's intent. He could have used the Viper camera which Fincher used on Zodiac, but went with some Sony junk with an 8mm photo-sensor because he liked the massive depth of field. I actually didn't even watch the whole picture, because it was such an eyesore.

 Quote:
Anyway, I don't hate digital technology in films at all, when used tastefully and seamlessly it is fantastic, but as far as picture quality (same as audio quality), you cannot photograph anything more stunning than Mario Bava or Fellini or Kubrick or Orson Welles, etc., did way back before computers became the norm.
Again, while their films were beautifully shot, that doesn't have anything to do with image quality. Do you think Kubrick or Welles didn't use the sharpest, most advanced technology available when making 2001, or War of the Worlds and Citizen Kane?

Again, my eyes don't have a grain sheen on them, detail is not lost in shadows, and bright lights don't bloom. Film has a look, but it isn't natural. Digital gets closer to how things really appear, and that's what I like... and I guess you don't.

Last edited by ClubNeon; 01/09/10 11:12 PM.

Pioneer PDP-5020FD, Marantz SR6011
Axiom M5HP, VP160HP, QS8
Sony PS4, surround backs
-Chris