Posted By: Kruncher Downloading HTML files - 02/17/11 12:58 AM
OK, MS operating system gurus, I've got an odd one for you.

I've Google'd until I'm blue in the face, and have read all manner of seemingly related stuff that naturally turned out to be unrelated.

Here's the deal. I'm trying to do some number Krunchin' by scraping data from web sites. I was supplied with two HTML files that were downloaded from the same web page: one using IE8 on Win7, the other using IE8 on XP.

The XP-derived file is about 30k. The Win7 file is about 50k, and it's full of stuff that makes it practically impossible for mere mortals to get the data out.

Does anyone have any idea why they're different and, more importantly, how to get the usable XP format out of a Win 7 box? That's key to keeping this analysis system running in the future.

I've spent hours on this over the last few days, and would really appreciate any pointers to concrete answers, or of course, answers themselves.

Switching browsers doesn't seem to help, BTW. Tried that. Several times.
Posted By: ClubNeon Re: Downloading HTML files - 02/17/11 01:04 AM
When you're in the Save As... dialog box, change the Save as type to: Webpage, HTML only.
Posted By: Ken.C Re: Downloading HTML files - 02/17/11 01:04 AM
What command was used to get the file out of the browser? Edit->View Source or File->Save? If Save, which of the two html options?
Posted By: Kruncher Re: Downloading HTML files - 02/17/11 01:10 AM
File, Save as, Webpage HTML only. I believe.

I got two HTM files, no other folders.
Posted By: SirQuack Re: Downloading HTML files - 02/17/11 01:22 AM
Often for the file type you will have options like Webpage, Complete and Webpage, HTML Only. The complete one will have more content. You should have a drop-down when you do a Save As to select the various options like mht, html, etc.
Posted By: Kruncher Re: Downloading HTML files - 02/17/11 03:38 AM
I've confirmed the save process with the sender:
File, Save as, Webpage HTML only.

Further, my own testing on a Win 7 system generated a Webpage Complete file of 59,033 bytes while the "HTML only" save version was 59,258 bytes for the same page. No, that's not a typo or reversed values.

This is the strangest thing...
Posted By: SirQuack Re: Downloading HTML files - 02/17/11 04:02 AM
Not sure this would help, but have you tried the MHT option?

"Saving in this format allows users to save a web page and its resources as a single MHTML file called a "Web Archive", where all images and linked files will be saved as a single entity."
Posted By: Kruncher Re: Downloading HTML files - 02/17/11 04:16 AM
Unfortunately that format is not really an option, Randy.

The end goal is to use software from Iopus, or something similar in function, to scrape data from literally hundreds of web pages to create a mini data mart. Other software will be used to capture the specific data values from the individual downloaded pages, and mht doesn't play nicely with that. For example, there will be a title of "2008-09" that appears in the browser, but the string "2008-09" can't be found in the mht file.
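A quick way to show the mismatch from the command prompt (the file names here are just placeholders for whatever the saves are called) is a literal search against each copy:

findstr /c:"2008-09" saved-page.htm
findstr /c:"2008-09" saved-page.mht

The plain .htm save should turn up the matching line, while the .mht search typically comes up empty, presumably because the archive stores its parts MIME-encoded rather than as raw markup.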
Posted By: pmbuko Re: Downloading HTML files - 02/17/11 04:24 AM
MHT would be more or less useless for scraping data. What you want is a simple file readable by any text editor.

Have you tried using Firefox? It has a very convenient option in the Save As box that allows you to save the web page as a text file. This will strip out all the scripts and non-visible portions of the web page and give you only what's actually displayed on the screen. For an example page I just visited, the file sizes for raw html (full page source code) and the text file version were 229K and 82K, respectively.

May I ask what you want to use to scrape these files?
Posted By: Kruncher Re: Downloading HTML files - 02/17/11 04:34 AM
I tried FF with the same poor result a few days ago, Peter. I know that the browser must play a role; after all, it's the software that's being used to create the file.

But when I used FF 3.6 to download from http://some-arbitrary-site.com, I got a larger .htm file using a Win 7 system than I did when I used FF 3.6 **for the exact same page** on my Vista box. WTF? Seriously! I know... I didn't buy it either when it was presented to me as the challenge in the first place.

As to the data scraper to be used on the downloaded files, Monarch is the tool of choice.

EDIT: Hold that thought. I didn't realize that I'd been using IE7 on my Vista box. Still, why would downloading the HTML file with one version of a browser, any browser (but in this case say IE), be different? Shouldn't the HTML be dictated purely by the source - the site in question? Or are content management systems sending out entirely different HTML based on the browser in use at the client side? I think I'm getting warm now...
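One way to test that hunch, sketched with a command-line fetcher like curl and made-up User-Agent strings and output names, is to request the same page while claiming to be two different browsers and compare what comes back:

curl -A "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 6.0)" http://some-arbitrary-site.com -o page-as-ie7.htm
curl -A "Mozilla/4.0 (compatible; MSIE 8.0; Windows NT 6.1)" http://some-arbitrary-site.com -o page-as-ie8.htm

If the two saved files differ the way the IE7 and IE8 downloads did, the server really is tailoring its markup to whatever browser it thinks is asking.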
Posted By: pmbuko Re: Downloading HTML files - 02/17/11 04:57 AM
To avoid any sort of browser "infection", you could try curl for Windows. It's built in on most UNIX/linux OSes and very easy to use. E.g., from the command prompt:

curl http://www.somesite.com/stuff.html -o somesite-stuff.html

This would download the stuff.html file from somesite.com and save it locally as somesite-stuff.html.

Since it's a command-line utility, you could batch a bunch of sites together.
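For example, here's a minimal batch sketch (the file names are placeholders) that feeds curl one "url local-name" pair per line from urls.txt:

@echo off
rem fetch-pages.bat - download every page listed in urls.txt
rem each line of urls.txt looks like: <url> <local file name>
for /f "tokens=1,2" %%U in (urls.txt) do curl -sS "%%U" -o "%%V"

(Typed directly at the prompt rather than in a .bat file, the variables would be %U and %V.)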
Posted By: Kruncher Re: Downloading HTML files - 02/17/11 05:02 AM
The plot thickens. The fellow who approached me with this in the first place maintains that he was using IE8 on both a Win 7 box and an XP box. Good, usable results files downloaded with the XP system, and bloated unmanageable files on the Win 7 box.

He bought a Win 7 box to progress with his project, but instead it's stopped the project dead in its tracks. He's still got the XP box, but the future is with Win 7, so that's what he'd prefer to use. Understandable, I believe.
Posted By: Kruncher Re: Downloading HTML files - 02/17/11 05:07 AM
That sounds like a great plan, Peter. Thanks very much for that information. Really top notch.

I left my AIX/Unix days behind me in the '90s, and it's easy to forget just how useful utilities built for those OS's are.

I'll pass that along and will try to post back his feedback here this week.
Posted By: ClubNeon Re: Downloading HTML files - 02/17/11 05:25 AM
wget may be easier to use than curl. It's my tool of choice.
Posted By: pmbuko Re: Downloading HTML files - 02/17/11 07:00 PM
True. wget is a bit more powerful and I'd use it instead if you need to grab a bunch of different files from a web server and want to filter out anything other than .htm or .html files. Here's an example I used recently:

I have a server that holds install and configuration files for the linux desktops I deploy and manage. In one of my automated installs, I need to grab the latest NVIDIA driver from my install server. The name of this file is not constant, but it always ends with a .run extension, so I use the following command to grab it:

wget -r -nH -np -nd -A run http://yum1:8080/nvidia/

The options basically say "look at all the files in the nvidia directory on that web server but only grab the ones that have a '.run' file extension." This works since I only ever keep one in there.
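Applied to the .htm/.html case above (the URL here is just a placeholder), the same pattern with an accept list would look something like:

wget -r -np -nH -nd -A htm,html http://www.somesite.com/reports/

-A takes a comma-separated list of suffixes, so only the .htm and .html files under that directory are kept.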
Posted By: ClubNeon Re: Downloading HTML files - 02/17/11 07:54 PM
wget can also be as simple as:

wget "http://www.axiomaudio.com/"

That'll create a file named "index.html" in your current directory. So that is easier than curl for getting a single document. You have to at least tell curl what name to save the file with, or it'll just write to the screen.

Of course you can tell wget to save with a different name by just giving it the "-O filename.html" option too (that's a capital O; lower-case -o sets wget's log file instead).
Posted By: pmbuko Re: Downloading HTML files - 02/17/11 08:16 PM
curl -O http://the.url.com/index.html will also save index.html in your current directory; -O names the local file after the file name in the URL.
Posted By: ClubNeon Re: Downloading HTML files - 02/17/11 08:18 PM
Still too much typing. laugh
Posted By: pmbuko Re: Downloading HTML files - 02/17/11 10:08 PM
A spurious criticism for a board regular to make.
Posted By: ClubNeon Re: Downloading HTML files - 02/17/11 10:50 PM
If I was typing -O every time I wanted to save a file, how would I have time to spend here?
Posted By: Ken.C Re: Downloading HTML files - 02/17/11 10:52 PM
Isn't that why you alias curl to curl -O?
Posted By: ClubNeon Re: Downloading HTML files - 02/17/11 10:56 PM
I don't even have curl on my machine. Keeping the disk space free for other things. I think having that alias around would also be a waste of RAM.
Posted By: Ken.C Re: Downloading HTML files - 02/17/11 10:59 PM
Yeah, we know your computer is quite limited on both disk space and RAM.
Posted By: ClubNeon Re: Downloading HTML files - 02/17/11 11:01 PM
I'm like the billionaire that picks up pennies off the ground.
Posted By: BobKay Re: Downloading HTML files - 02/17/11 11:05 PM
Originally Posted By: ClubNeon
I'm like the billionaire that picks up pennies off the ground.

To whip them at poor people.
Posted By: pmbuko Re: Downloading HTML files - 02/18/11 02:17 AM
Lemme guess... you've compiled your own kernel to save space, too.
Posted By: ClubNeon Re: Downloading HTML files - 02/18/11 03:03 AM
Kernel? I compiled my whole operating system, without -g.