

Posted

Back in the good old days, if I wanted to copy 100 files from one directory to another I could set up a .bat file to do it.

Is there an easy way to copy files from the internet to my computer? For example:

copy http://www.domain.com/a001.doc c:/Docs

copy http://www.domain.com/a002.doc c:/Docs

copy http://www.domain.com/a003.doc c:/Docs

I'm pretty sure that the old bat file can't do that, but can VBScript do it, or some other easy and free utility?

Thanks.

Posted

Have you got FTP access to the directory you want to suck files from? If so, you could try a free FTP program like FileZilla that will let you transfer lots of things without effort.

Otherwise I dunno. There are website slurping tools but I don't mess with those.

Posted

Here's a VBScript I found on the internet:

x=0

Do While x <> 1
   set objIE = CreateObject("InternetExplorer.Application")
   objIE.visible=false
   objIE.navigate2("http://www.url.com/images/file.jpg")
   Do While objIE.busy=True
      wscript.sleep 60
   Loop
   call objIE.Document.ExecWB("saveas",2,"C:\temp\file.jpg")
   objIE.Quit
   Wscript.Sleep 300000
Loop

Posted

A standard command-line tool for Unix, wget, is apparently also available for Windows. It can do simple things like "wget URL" to download a file into the current directory where you run the command (say, from a batch script), as well as complex things like recursively downloading an entire website and adjusting the links so you can browse it locally on your disk without keeping a network connection.

The main site for the wget tool is http://wget.sunsite.dk/ where you can learn more about the tool.

They have links there to where you can get a Windows version, including http://space.tin.it/computer/hherold/
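To do what the original post asks, a plain .bat file calling wget would look something like this. This is just a sketch: it assumes wget.exe is on your PATH, and the -P (--directory-prefix) option tells wget which directory to save into.

```bat
rem Download each document into C:\Docs (the URLs are the examples from the question)
wget -P C:\Docs http://www.domain.com/a001.doc
wget -P C:\Docs http://www.domain.com/a002.doc
wget -P C:\Docs http://www.domain.com/a003.doc
```

For a long list of files you could also put the URLs in a text file, one per line, and run "wget -P C:\Docs -i urls.txt".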

Posted

Unfortunately, some servers block wget! In particular, the Google cache blocks it. You may be able to get around this by faking the identity of the tool; I never worked out how, and ended up copying thousands of files manually. If you have to resort to that, one trick is to visit them all and then save them from your Internet Explorer cache. However, you must watch out for the cache filling up, with newly saved files replacing old ones.
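For the record, wget does have a switch for faking its identity: -U (--user-agent) replaces the default "Wget/..." User-Agent header, which is what many servers key their blocking on. A sketch, where the agent string and URL are only examples:

```bat
rem Pretend to be a browser; -P sets the save directory as before
wget -U "Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)" -P C:\Docs http://www.domain.com/a001.doc
```

No guarantee this gets past every server, but it defeats simple User-Agent filtering.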

In part, it depends on what you want to do. The IE 'Save As' command saves subordinate files, but you may not want multiple copies of boilerplate files. Also, in my experience, IE 'Save As' can badly mangle subordinate JavaScript files.

  • 2 weeks later...
Posted (edited)

I tried the script above, but unfortunately it doesn't work. I also found this code:

URL="http://www.url.com/xxx.yyy"
TEMPPATH = "C:\Temp\"
OUTPUTFILE = TEMPPATH & Mid(URL, InStrRev(URL, "/") + 1)

const ForReading = 1 , ForWriting = 2 , ForAppending = 8 
Set fsoMain = CreateObject("Scripting.FileSystemObject")
Set objHTTP = CreateObject("WinHttp.WinHttpRequest.5.1")
Set fsResults = fsoMain.OpenTextFile(OUTPUTFILE,ForWriting, True)
Call objHTTP.Open("GET", url, FALSE)
objHTTP.Send

for i = 1 to lenb(objHTTP.ResponseBody)
  fsResults.write Chr(Ascb(midb(objHTTP.ResponseBody,i,1)))
Next
fsResults.Close()

This code works but it is extremely slow.
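The slowness is almost certainly the byte-by-byte Chr/AscB loop through ResponseBody. A variant that hands the whole binary response to an ADODB.Stream and saves it in one operation should be far faster, and also avoids pushing binary data through a text file. A sketch, reusing the same placeholder URL and temp path as the code above:

```vbscript
' Download a file with WinHttp, then write the binary response body
' in a single operation via ADODB.Stream instead of a character loop.
URL = "http://www.url.com/xxx.yyy"
TEMPPATH = "C:\Temp\"
OUTPUTFILE = TEMPPATH & Mid(URL, InStrRev(URL, "/") + 1)

Set objHTTP = CreateObject("WinHttp.WinHttpRequest.5.1")
Call objHTTP.Open("GET", URL, False)
objHTTP.Send

Const adTypeBinary = 1
Const adSaveCreateOverWrite = 2

Set objStream = CreateObject("ADODB.Stream")
objStream.Type = adTypeBinary
objStream.Open
objStream.Write objHTTP.ResponseBody          ' whole body at once
objStream.SaveToFile OUTPUTFILE, adSaveCreateOverWrite
objStream.Close
```

Same placeholders, same output file; only the writing step changes.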

I've just downloaded "wget" so I'll have a play with it to see if it will do the job for me.

Thanks.

Dan

Edited by patya dan
