Originally Posted By: ClubNeon
 Originally Posted By: chesseroo
Just a general question.
I've had some discussions with some rather brilliant computer engineers and programmers (you know, the kind of guys who learned how to build circuit boards at the age of 12) who simply stated that 'any compression format can be assumed to induce errors by the very nature of how it compresses data'.

I don't know what those guys were doing, but at 16 I was designing lossless packing algorithms for 2D graphics. I know for a fact that RLE (run-length encoding) and dictionary pattern matching are lossless. They are also blindingly obvious, and a 16-year-old could come up with them on his own.
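Just to make that concrete, here's a minimal run-length encoding sketch (the names and structure are purely my own illustration, not any real codec); unpacking the runs gives back exactly the bytes that went in:

# Minimal RLE sketch: pack repeated bytes as (count, value) pairs.
def rle_encode(data: bytes) -> list[tuple[int, int]]:
    runs = []
    i = 0
    while i < len(data):
        j = i
        while j < len(data) and data[j] == data[i]:
            j += 1
        runs.append((j - i, data[i]))   # (run length, byte value)
        i = j
    return runs

def rle_decode(runs: list[tuple[int, int]]) -> bytes:
    return b"".join(bytes([value]) * count for count, value in runs)

original = b"AAAAABBBCCCCCCCCD"
assert rle_decode(rle_encode(original)) == original   # bit-for-bit identical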

Lossless audio is somewhat more complex, but I still have no doubt that it can be done without introducing error. One of the usual methods is to write a prediction algorithm that tries to guess what value will come next based on the previous values. If it's right, then nothing needs to be stored. If it's wrong, then the difference between the predicted value and the actual value is stored. The error is the data.
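As a rough illustration of that idea (a toy first-order predictor I'm making up here, not the actual scheme any particular codec like FLAC uses), the encoder stores only the prediction errors, and the decoder rebuilds the original samples from them exactly:

# Toy predictive coder: predict "next sample = previous sample" and
# store only the residual (the prediction error). Lossless by construction.
def encode(samples: list[int]) -> list[int]:
    residuals, prev = [], 0
    for s in samples:
        residuals.append(s - prev)   # the error is the data
        prev = s
    return residuals

def decode(residuals: list[int]) -> list[int]:
    samples, prev = [], 0
    for r in residuals:
        prev += r                    # undo the prediction exactly
        samples.append(prev)
    return samples

pcm = [0, 3, 7, 8, 8, 6, 2, -1]      # made-up sample values
assert decode(encode(pcm)) == pcm    # exact reconstruction, nothing lost

The residuals tend to be much smaller numbers than the samples themselves, which is what a following entropy-coding stage takes advantage of.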

Properly implemented, there are many ways to reduce the redundancy in data so that, when the packing is undone, you get an exact copy of the original. One caveat: in packed form, damage to the data will be more extensive when unpacked. But that's not a failure of the compression, it's a failure of the transmission/storage medium, and it's why we have error checking and correction.
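A quick sketch of that last point, using Python's standard zlib.crc32 purely as an example checker (the data here is a made-up stand-in, not a real audio frame):

import zlib

packed = bytes(range(64))                    # stand-in for some packed audio data
checksum = zlib.crc32(packed)                # stored/sent alongside the packed data

corrupted = bytearray(packed)
corrupted[10] ^= 0x01                        # flip one bit "in transit"

print(zlib.crc32(packed) == checksum)            # True: arrived intact
print(zlib.crc32(bytes(corrupted)) == checksum)  # False: corruption caught before unpacking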

Unfortunately I cannot reproduce the conversation, as I was merely a spectator and the language was beyond my knowledge base, but it does sound similar to what you are speaking of: the concept of prediction algorithms and how effective they are given the complexity of the audio range in a digital recording.

Does a 450.1 Hz tone get assigned the same compressed digital tag as a 450.001 Hz tone because they are 'close enough'?
Does a 560 Hz tone 1 second long get the same compressed hexadecimal digital tag as the 561 Hz note only 0.001 seconds long that follows it, because the person writing the compression algorithm would 'know' that any note shorter than x seconds could not be interpreted by the human brain anyway?

This is a terrible example, I realize, but if I understand their conversation correctly, that was the gist of the argument: that data could still be lost because of the predictive nature of the compression algorithms.
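To make my own question a bit more concrete (this is purely my toy illustration, nothing from their conversation): even two tones that close together quantize to different 16-bit sample values, so a truly lossless coder would have to hand back two different streams rather than one shared 'tag':

import math

def tone(freq_hz: float, n: int = 64, rate: int = 44100) -> list[int]:
    # Quantize a sine wave to 16-bit integer samples.
    return [round(32767 * math.sin(2 * math.pi * freq_hz * t / rate)) for t in range(n)]

print(tone(450.1) != tone(450.001))   # True: the sample values differ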


"Those who preach the myths of audio are ignorant of truth."