
wget downloads index.html unnecessarily and halts batch script (Windows)

I am having trouble with wget64 on Windows:

@echo OFF
FOR /L %%i in (1, 1, 9999) DO (
cls
echo Downloading file %%i
wget64.exe -e robots=off --progress=bar --show-progress -r -np -nd -nc -HDfilepi.com --content-disposition -a wget.log ebooks.info/book/%%i/
)

wget downloads index.html (which I feel is unnecessary), then proceeds to the hosted file and downloads it if it does not already exist at the destination. However, it then fails to retrieve the index.html of the next book, so the next download never starts.

Is it really necessary to download index.html, and if so, how can I tell wget to erase it and download a fresh copy every time?

Disclaimer: I am only asking about this specific behaviour of wget. I am not asking for help with the download script itself, nor do I condone downloading files illegally.

I think you may be overthinking this. In recursive mode (-r) wget has to fetch the listing page (index.html) to discover the link to the hosted file, so that request can't be skipped; the real problem is that -nc keeps the stale index.html from being replaced on the next iteration. Assuming all the other files wget retrieves are the correct ones, and since you're using a FOR loop anyway, why not just delete index.html on each iteration, like so:

@echo OFF
FOR /L %%i in (1, 1, 9999) DO (
cls
echo Downloading file %%i
wget64.exe -e robots=off --progress=bar --show-progress -r -np -nd -nc -HDfilepi.com --content-disposition -a wget.log ebooks.info/book/%%i/
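REM remove the stale listing page so -nc does not block fetching the next book's index.html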
if exist index.html del index.html
)
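
Alternatively, if wget64.exe is a standard GNU wget build, the --reject option may save you the extra del step: HTML pages that match the reject pattern are still fetched (recursive retrieval needs them to find the links) but are deleted once they have been parsed, so a stale index.html never sticks around to trip -nc on the next pass. A sketch of that variant, untested on your setup:

@echo OFF
FOR /L %%i in (1, 1, 9999) DO (
cls
echo Downloading file %%i
REM -R "index.html*" tells wget to discard the listing page after parsing it for links
wget64.exe -e robots=off --progress=bar --show-progress -r -np -nd -nc -HDfilepi.com --content-disposition -R "index.html*" -a wget.log ebooks.info/book/%%i/
)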
