What the salesman was trying to get across is that 1080i contains only 50% native information. Why he chose to say 520P is confusing to me. He obviously doesn’t have a full understanding, but he is somewhat correct – stressing “somewhat”.

1080P is 1920 X 1080 pixels = 2,073,600 pixels of data displayed
1080i is the same resolution and the same pixel grid, but only half of it is native data at any one instant (1,036,800 pixels of native data per field)
720P displays come in different panel resolutions, but the format carries 921,600 pixels of native data (i.e. 1280 X 720)
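
If you want to sanity-check those numbers yourself, here’s a quick Python snippet (just my own back-of-the-envelope math, nothing official) that multiplies it out:

```python
# Rough pixel-count arithmetic for the formats discussed above.
def pixels(width, height):
    return width * height

full_1080 = pixels(1920, 1080)   # 2,073,600 pixels per progressive 1080 frame
half_1080i = full_1080 // 2      # 1,036,800 pixels of native data per 1080i field
p720 = pixels(1280, 720)         # 921,600 pixels per 720P frame

print(f"1080P frame : {full_1080:,}")
print(f"1080i field : {half_1080i:,}")
print(f"720P  frame : {p720:,}")
```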

The number of pixels is what we see as information on a display. Obviously, as that number increases, those little dots of information become smaller and more tightly packed onto the screen, provided the screen size remains the same.
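
To put rough numbers on “smaller and more tightly packed”, here’s a quick Python sketch (my own back-of-the-envelope figuring, assuming the screen is measured on the diagonal) that works out pixels per inch for a given screen size:

```python
import math

# Pixels per inch for a display of a given diagonal size.
# A bigger number means smaller, more tightly packed dots.
def pixels_per_inch(diagonal_inches, h_pixels, v_pixels):
    diagonal_pixels = math.hypot(h_pixels, v_pixels)
    return diagonal_pixels / diagonal_inches

for label, (w, h) in {"720P": (1280, 720), "1080P": (1920, 1080)}.items():
    print(f'{label} on a 36" screen: {pixels_per_inch(36, w, h):.0f} ppi')
# 720P on a 36" screen: 41 ppi
# 1080P on a 36" screen: 61 ppi
```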

The difference between 1080P and 720P, as seen by the observer, is that every line of information is actual data in both cases, but the pixels are larger on the 720 display than on the 1080 display (display size being equal). Depending on how close the observer is to the display, or the actual size of the display, this may be a moot point, as the observer may or may not be able to see a difference. As you get closer to the display you will be able to make out actual pixels sooner on the 720 display than on the 1080 display – which is why you always see charts and graphs referencing viewing distances, display size and recommended resolutions. In other words, if you sit ten feet away from a 36” display, you will not see an improvement if you go with 1080 versus 720, unless you watch TV with a pair of binoculars.
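
You can even put a rough number on that ten-foot claim. The sketch below uses the common ~1 arcminute figure for normal visual acuity (a rule of thumb I’m assuming here, not gospel) to estimate the distance beyond which individual pixels blur together:

```python
import math

ARC_MINUTE = math.radians(1 / 60)   # ~1 arcminute: a common visual-acuity rule of thumb

# Distance (in feet) beyond which a single pixel is too small for the eye to resolve.
def pixels_vanish_beyond(diagonal_inches, h_pixels, v_pixels):
    pixel_pitch = diagonal_inches / math.hypot(h_pixels, v_pixels)  # inches per pixel
    return pixel_pitch / math.tan(ARC_MINUTE) / 12                  # inches -> feet

for label, (w, h) in {"720P": (1280, 720), "1080P": (1920, 1080)}.items():
    print(f'36" {label}: pixels blend together beyond roughly {pixels_vanish_beyond(36, w, h):.1f} ft')
# 36" 720P: pixels blend together beyond roughly 7.0 ft
# 36" 1080P: pixels blend together beyond roughly 4.7 ft
```

By that estimate, both displays are already past the point of visible pixels at ten feet, which is exactly why the extra resolution buys you nothing at that distance.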

Without going into too much detail describing the interlacing process……

Film is shot at 24 frames per second and video is shot at 30 fps (NTSC) and 25 fps (PAL), all at various progressive resolutions. HD digital cameras shoot at up to 4K pixels (and maybe higher). I don’t recall what real 35mm film is in comparison to digital, but it’s significantly higher than anything we’ll ever see on a digital display. Well, maybe someday, but not anytime soon….. However, 35mm film is converted to digital HD/DVD and BR content at 1080P/24. That’s as good as it gets, currently. HD/TV is broadcast at 1080i or 720P, both at 60 fps NTSC or 50 fps PAL. Each of these broadcast feeds is highly processed to convert from 1080P/24-30 to 1080i/720P at 50/60 fps. (This gets confusing real quick, as film is sometimes transferred to video and you start to see 50 and 60 fps formats being referred to.)
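
As a toy illustration of how 24 fps film gets fitted into 60 fields per second, here’s a Python sketch of 3:2 pulldown, one common way that conversion is done; the frame letters and field names are just my own labels:

```python
# 3:2 pulldown: every 4 film frames become 10 video fields (24 fps -> 60 fields/sec).
film_frames = ["A", "B", "C", "D"]
cadence     = [3, 2, 3, 2]        # how many fields each film frame is spread across

fields = []
for frame, repeats in zip(film_frames, cadence):
    for _ in range(repeats):
        parity = "top" if len(fields) % 2 == 0 else "bottom"
        fields.append(f"{frame}-{parity}")

print(fields)
# ['A-top', 'A-bottom', 'A-top', 'B-bottom', 'B-top',
#  'C-bottom', 'C-top', 'C-bottom', 'D-top', 'D-bottom']
```

It’s that repeating 3-2-3-2 pattern that a deinterlacer has to recognize (the “cadence”) to put the original film frames back together cleanly.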

Interlaced data contain roughly half the original native data of progressive data (I say roughly because there may be some scaling going on to fit an image to a display, which is another topic). In a nutshell, to get to an interlaced resolution, alternate lines of native information in each frame (the two halves are referred to as fields) are removed and replaced by digital ‘flags’ representing the deleted data. A deinterlacer then identifies these flags and attempts to reconstruct the deleted data (fields) to get a full progressive output. Unfortunately, sometimes these flags get lost in transfer or transmission and the deinterlacer cannot properly reconstruct the progressive output because it loses sync with the cadence that the interlacing process used. The manner in which the interlacing is done varies. I won’t go into that, as a quick Google search will take you to numerous web sites describing the process with graphics.
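
Here’s a bare-bones Python/numpy sketch (my own simplification; it ignores flags, cadence and compression entirely) of what each field actually holds and what a simple “weave” reconstruction does:

```python
import numpy as np

# Stand-in for one tiny 8-line progressive frame.
frame = np.arange(8 * 4).reshape(8, 4)

# Interlacing: each field keeps only every other line - half the native data.
top_field    = frame[0::2]    # even lines
bottom_field = frame[1::2]    # odd lines

# "Weave" deinterlacing: re-interleave the two fields into a full frame.
rebuilt = np.empty_like(frame)
rebuilt[0::2] = top_field
rebuilt[1::2] = bottom_field

# Perfect only when both fields came from the same instant (film content);
# with true video the two fields are a fraction of a second apart.
assert (rebuilt == frame).all()
```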

It doesn’t really matter how all this works, but it is very important to know that some video processors de-interlace better than others, or in other words, reconstruct the original data better or worse than others. Then there’s film versus video, which is recorded and transferred with different frame rates and interlaced with different cadences, and the deinterlacer needs to identify whether it is film or video (like credits, which are video mixed in with film, or video commercials mixed in with film). What makes a true native-data progressive output appear smoother than an interlaced output is the fact that all the data you see is real data, and not the deinterlacer’s idea of what to do to reconstruct missing data. Sometimes the video processor will screw up and drop fields or frames because it loses sync with the interlacing cadence, and you see tearing, judder and artifacts. These are amplified with sports and video, as there tends to be more left-to-right movement across a display (remember that alternate lines are eliminated and reconstructed, so anything that moves sideways between fields shows up as jagged edges wherever the reconstruction guesses wrong).
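
Here’s one more toy example (again my own, greatly simplified) of why sideways motion is the worst case: weave together two fields captured at different instants, and anything that moved shows up in two places at once:

```python
import numpy as np

height, width = 6, 12
frame_t0 = np.zeros((height, width), dtype=int)
frame_t1 = np.zeros((height, width), dtype=int)
frame_t0[:, 3:6] = 1      # a block of "object" pixels...
frame_t1[:, 6:9] = 1      # ...that has slid to the right by the next field

# Weave the top field from one instant with the bottom field from the other.
woven = np.empty((height, width), dtype=int)
woven[0::2] = frame_t0[0::2]
woven[1::2] = frame_t1[1::2]

for row in woven:
    print("".join("#" if v else "." for v in row))
# Alternating rows show the object in two positions - the classic
# comb/tear pattern you see when deinterlacing loses its way on fast motion.
```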

So what’s better than relying on a VP to reconstruct deleted data? Simple: display native progressive data. It really doesn’t get any simpler than that. A display that will accept and convert 1080P/24 to 720P will have a smoother, judder-free picture, without tearing and artifacts, compared with one that displays deinterlaced 1080i, because it never eliminates entire rows of native data in the first place. This however is not the general contention and source of argument, and folks need to keep the subject of the argument on track. The argument always ends up being, “my 1080i TV looks better than my uncle’s 720P TV”, or vice versa. That is not comparing apples to apples. It’s comparing apples to bacon.