Downloading HTML files

Posted by: Kruncher

Downloading HTML files - 02/16/11 07:58 PM

OK, MS operating system gurus, I've got an odd one for you.

I've Google'd until I'm blue in the face, and have read all manner of seemingly related stuff that naturally turned out to be unrelated.

Here's the deal. I'm trying to do some number Krunchin' by scraping data from web sites. I was supplied with two html files downloaded from the same web page: one using IE8 on Win7, the other using IE8 on XP.

The XP-derived file is about 30k. The Win7 file is about 50k, and is full of stuff that makes it practically impossible for mere mortals to get the data out.

Does anyone have any idea why they're different and, more importantly, how to get the usable XP format out of a Win 7 box? That's key to keeping this analysis system running in the future.

I've spent hours on this over the last few days, and would really appreciate any pointers to concrete answers, or of course, answers themselves.

Switching browsers doesn't seem to help, BTW. Tried that. Several times.
Posted by: ClubNeon

Re: Downloading HTML files - 02/16/11 08:04 PM

When you're in the Save As... dialog box, change the Save as type to: Webpage, HTML only.
Posted by: Ken.C

Re: Downloading HTML files - 02/16/11 08:04 PM

What command was used to get the file out of the browser? View->Source or File->Save? If Save, which of the two html options?
Posted by: Kruncher

Re: Downloading HTML files - 02/16/11 08:10 PM

File, Save as, Webpage HTML only, I believe.

I got two HTM files, no other folders.
Posted by: SirQuack

Re: Downloading HTML files - 02/16/11 08:22 PM

Often for file type you will have an option for Webpage Complete, or just HTML Only. The complete one will save more content. You should have a drop-down in the Save As dialog to select various formats like mht, html, etc.
Posted by: Kruncher

Re: Downloading HTML files - 02/16/11 10:38 PM

I've confirmed the save process with the sender:
File, Save as, Webpage HTML only.

Further, my own testing on a Win 7 system generated a Webpage Complete file of 59,033 bytes while the "HTML only" save version was 59,258 bytes for the same page. No, that's not a typo or reversed values.

This is the strangest thing...
Posted by: SirQuack

Re: Downloading HTML files - 02/16/11 11:02 PM

Not sure this would help, but have you tried the MHT option?

"Saving in this format allows users to save a web page and its resources as a single MHTML file called a "Web Archive", where all images and linked files will be saved as a single entity."
Posted by: Kruncher

Re: Downloading HTML files - 02/16/11 11:16 PM

Unfortunately that format is not really an option, Randy.

The end goal is to use software from iOpus, or something similar in function, to scrape data from literally hundreds of web pages to create a mini data mart. Other software will be used to capture the specific data values from the individual downloaded pages, and mht doesn't play nicely with that. For example, there will be a title of "2008-09" that appears in the browser, but the string "2008-09" can't be found in the mht file.
Posted by: pmbuko

Re: Downloading HTML files - 02/16/11 11:24 PM

MHT would be more or less useless for scraping data. What you want is a simple file readable by any text editor.
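The likeliest reason your "2008-09" string goes missing, by the way, is MHT's transfer encoding: the HTML part of a Web Archive is usually stored quoted-printable, which can split a string across lines with a soft line break. A quick sketch of the effect, assuming Python 3 is on hand (the file name is made up):

```shell
# The HTML part of an MHT archive is typically quoted-printable
# encoded, so "2008-09" can be stored split across two lines as
# "2008-=" and "09".  Simulate that, then decode with Python's
# stdlib quopri module so a plain-text search works again:
printf 'Title: 2008-=\n09 season\n' > demo.mht
grep -q '2008-09' demo.mht || echo "raw grep: not found"
python3 -m quopri -d demo.mht | grep '2008-09'   # found after decoding
```

So even when the data is in there, a plain-text scraper won't see it until the archive is decoded.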

Have you tried using Firefox? It has a very convenient option in the Save As box that allows you to save the web page as a text file. This will strip out all the scripts and non-visible portions of the web page and give you only what's actually displayed on the screen. For an example page I just visited, the file sizes for raw html (full page source code) and the text file version were 229K and 82K, respectively.
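If you'd rather skip the browser entirely, a crude command-line version of that text-only save can be faked with sed. This is a rough sketch only: sed is no substitute for a real HTML parser, and a text-mode browser like lynx with its -dump option does the job properly if you have it installed.

```shell
# Strip tags and blank lines from a saved page.  Good enough for a
# quick look; it won't handle multi-line tags, scripts, or entities
# the way a real parser (or lynx -dump) would.
printf '<html><body>\n<p>2008-09</p>\n<p>results</p>\n</body></html>\n' > page.html
sed -e 's/<[^>]*>//g' page.html | grep -v '^[[:space:]]*$' > page.txt
cat page.txt
```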

May I ask what you want to use to scrape these files?
Posted by: Kruncher

Re: Downloading HTML files - 02/16/11 11:34 PM

I tried FF with the same poor result a few days ago, Peter. I know that the browser must play a role; after all, it's the software that's being used to create the file.

But when I used FF 3.6 to download from http://some-arbitrary-site.com, I got a larger .htm file using a Win 7 system than I did when I used FF 3.6 **for the exact same page** on my Vista box. WTF? Seriously! I know... I didn't buy it either when it was presented to me as the challenge in the first place.

As to the data scraper to be used on the downloaded files, Monarch is the tool of choice.

EDIT: Hold that thought. I didn't realize that I'd been using IE7 on my Vista box. Still, why would the HTML file downloaded with one version of a browser, any browser (but in this case say IE), differ from another's? Shouldn't the HTML be dictated purely by the source, i.e. the site in question? Or are content management systems sending out entirely different HTML based on the browser in use on the client side? I think I'm getting warm now...
Posted by: pmbuko

Re: Downloading HTML files - 02/16/11 11:57 PM

To avoid any sort of browser "infection", you could try curl for Windows. It's built into most UNIX/Linux OSes and very easy to use. E.g., from the command prompt:

curl http://www.somesite.com/stuff.html -o somesite-stuff.html

This would download the stuff.html file from somesite.com and save it locally as somesite-stuff.html.
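curl can also impersonate a given browser via its -A (User-Agent) option, which is a direct way to test the browser-sniffing theory: fetch the same page twice with different UA strings and compare what comes back. A sketch (the UA strings are rough approximations of IE8's, and a file:// URL stands in for the real page so it runs without a network; substitute the real http:// address to do the actual comparison):

```shell
# Local demo page; -A sets the User-Agent header (honored for
# http://, simply ignored for file://).
printf '<html><body>page</body></html>\n' > sample.html
URL="file://$PWD/sample.html"

# Claim to be IE8 on XP, then IE8 on Win7:
curl -s -A "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 5.1)" "$URL" -o xp-ie8.html
curl -s -A "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)" "$URL" -o win7-ie8.html
wc -c xp-ie8.html win7-ie8.html   # against a real server, differing sizes = browser sniffing
```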

Since it's a command-line utility, you could batch a bunch of sites together.
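For example, something along these lines (everything here is made up for illustration, and file:// URLs let the sketch run without a network; in practice urls.txt would list the real http:// pages):

```shell
# Build a list of pages to grab, one URL per line:
printf '<p>A</p>\n' > a.html
printf '<p>B</p>\n' > b.html
printf 'file://%s/a.html\nfile://%s/b.html\n' "$PWD" "$PWD" > urls.txt

# Fetch each URL and save it locally under its own file name:
while read -r url; do
  curl -s "$url" -o "dl-$(basename "$url")"
done < urls.txt
ls dl-a.html dl-b.html
```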
Posted by: Kruncher

Re: Downloading HTML files - 02/17/11 12:02 AM

The plot thickens. The fellow who approached me with this in the first place maintains that he was using IE8 on both a Win 7 box and an XP box. Good, usable results files downloaded with the XP system, and bloated unmanageable files on the Win 7 box.

He bought a Win 7 box to progress with his project, but instead it's stopped the project dead in its tracks. He's still got the XP box, but the future is with Win 7, so that's what he'd prefer to use. Understandable, I believe.
Posted by: Kruncher

Re: Downloading HTML files - 02/17/11 12:07 AM

That sounds like a great plan, Peter. Thanks very much for that information. Really top notch.

I left my AIX/Unix days behind me in the '90s, and it's easy to forget just how useful the utilities built for those OSes are.

I'll pass that along and will try to post back his feedback here this week.
Posted by: ClubNeon

Re: Downloading HTML files - 02/17/11 12:25 AM

wget may be easier to use than curl. It's my tool of choice.
Posted by: pmbuko

Re: Downloading HTML files - 02/17/11 02:00 PM

True. wget is a bit more powerful and I'd use it instead if you need to grab a bunch of different files from a web server and want to filter out anything other than .htm or .html files. Here's an example I used recently:

I have a server that holds install and configuration files for the linux desktops I deploy and manage. In one of my automated installs, I need to grab the latest NVIDIA driver from my install server. The name of this file is not constant, but it always ends with a .run extension, so I use the following command to grab it:

wget -r -nH -np -nd -A run http://yum1:8080/nvidia/

The options basically say "look at all the files in the nvidia directory on that web server but only grab the ones that have a '.run' file extension." This works since I only ever keep one in there.
Posted by: ClubNeon

Re: Downloading HTML files - 02/17/11 02:54 PM

wget can also be as simple as:

wget "http://www.axiomaudio.com/"

That'll create a file named "index.html" in your current directory. So that is easier than curl for getting a single document. You have to at least tell curl what name to save the file as, or it'll just write to the screen.

Of course you can tell wget to save with a different name by just giving it the "-O filename.html" option too.
Posted by: pmbuko

Re: Downloading HTML files - 02/17/11 03:16 PM

curl -O http://the.url.com/index.html will also save index.html in your current directory. (With -O, curl names the local file after the last part of the URL, so the URL needs a file name on the end.)
Posted by: ClubNeon

Re: Downloading HTML files - 02/17/11 03:18 PM

Still too much typing. *laugh*
Posted by: pmbuko

Re: Downloading HTML files - 02/17/11 05:08 PM

A spurious criticism for a board regular to make.
Posted by: ClubNeon

Re: Downloading HTML files - 02/17/11 05:50 PM

If I was typing -O every time I wanted to save a file, how would I have time to spend here?
Posted by: Ken.C

Re: Downloading HTML files - 02/17/11 05:52 PM

Isn't that why you alias curl to curl -O?
Posted by: ClubNeon

Re: Downloading HTML files - 02/17/11 05:56 PM

I don't even have curl on my machine. Keeping the disk space free for other things. I think having that alias around would also be a waste of RAM.
Posted by: Ken.C

Re: Downloading HTML files - 02/17/11 05:59 PM

Yeah, we know your computer is quite limited on both disk space and RAM.
Posted by: ClubNeon

Re: Downloading HTML files - 02/17/11 06:01 PM

I'm like the billionaire who picks up pennies off the ground.
Posted by: BobKay

Re: Downloading HTML files - 02/17/11 06:05 PM

Originally Posted By: ClubNeon
I'm like the billionaire that picks up pennies off the ground.

To whip them at poor people.
Posted by: pmbuko

Re: Downloading HTML files - 02/17/11 09:17 PM

Lemme guess... you've compiled your own kernel to save space, too.
Posted by: ClubNeon

Re: Downloading HTML files - 02/17/11 10:03 PM

Kernel? I compiled my whole operating system, without -g.