Aria2 continue download

I have an alias for aria2 that downloads files from an ftp server using an input file. This is how I had it set up.

aria2c --max-concurrent-downloads=1 --max-connection-per-server=6 --ftp-user=<user> --ftp-passwd=<password> --dir=/home/<username>/Downloads --input-file=/home/<username>/scripts/downloads.txt

I ran into an issue just now (not sure why, as it never happened before) where it wouldn't continue and would instead try to re-download the files as .1 copies.

So I read the man page and saw that there is --continue, so I just changed it to

aria2c --max-concurrent-downloads=1 --max-connection-per-server=6 --continue=true --ftp-user=<user> --ftp-passwd=<password> --dir=/home/<username>/Downloads --input-file=/home/<username>/scripts/downloads.txt

So it works now, but my only issue is that it has to loop through the input file and check each download, making sure they're downloaded, until it finds where it left off. For only 4 files downloaded out of 10 (all under 1gb), it started at 15:51:52 and only found the .aria2 file (#5/10) to resume at 16:00:16. Sometimes I'm dealing with 20+ files, or files larger than 1gb, and I'm unsure whether that delay also scales with the download size itself. This could potentially add a large delay, maybe an hour. Is there any way to force it to search for an existing .aria2 file in the directory and immediately start there, or do I just have to deal with it, or remove finished files from the text file to avoid this?

I have 15,000 files and ended up writing a wrapper around aria2c that first checks whether a file is already there with the same size and skips files like that. It may be that aria2c is the wrong tool for this job. Have you had a look at lftp and its "mirror" command?
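For illustration only, here is a minimal sketch of such a wrapper in bash (not the poster's actual script). It assumes one ftp:// URL per line in downloads.txt, GNU stat, and that curl -I on an ftp:// URL reports the remote size as a Content-Length header; the <user>/<password>/<username> placeholders are carried over from the question.

#!/usr/bin/env bash
# Sketch of a wrapper: skip URLs whose local copy already exists with the
# same size as the remote file, then hand the remaining URLs to aria2c.
set -euo pipefail

FTP_USER="<user>"
FTP_PASS="<password>"
DOWNLOAD_DIR="/home/<username>/Downloads"
INPUT_FILE="/home/<username>/scripts/downloads.txt"
REMAINING="$(mktemp)"

while IFS= read -r url; do
    [ -z "$url" ] && continue
    name="$(basename "$url")"
    local_path="$DOWNLOAD_DIR/$name"
    if [ -f "$local_path" ] && [ ! -f "$local_path.aria2" ]; then
        # curl -I on an ftp:// URL asks the server for the file size (SIZE)
        # and reports it as Content-Length.
        remote_size="$(curl -sI --user "$FTP_USER:$FTP_PASS" "$url" \
            | awk '/Content-Length/ {print $2}' | tr -d '\r' || true)"
        local_size="$(stat -c %s "$local_path")"   # GNU stat
        if [ -n "$remote_size" ] && [ "$remote_size" = "$local_size" ]; then
            continue   # sizes match and no .aria2 control file: skip it
        fi
    fi
    printf '%s\n' "$url" >> "$REMAINING"
done < "$INPUT_FILE"

aria2c --max-concurrent-downloads=1 --max-connection-per-server=6 \
    --continue=true --ftp-user="$FTP_USER" --ftp-passwd="$FTP_PASS" \
    --dir="$DOWNLOAD_DIR" --input-file="$REMAINING"
rm -f "$REMAINING"

With lftp, the equivalent would be something along the lines of lftp -u <user>,<password> -e "mirror -c -n /remote/dir /home/<username>/Downloads; quit" ftp://<host> where mirror -c continues partial transfers and -n only fetches files newer than the local copy; <host> and /remote/dir are placeholders that aren't in the original post.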

aria2c always uses its .aria2 control file if one is present. However, it is removed as soon as the file is downloaded completely.

You ask it to download only 1 file at a time. It needs some time to check that the current file is downloaded completely.

Try adding --force-save=true to preserve the .aria2 control file even when a download is complete.
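For example, applied to the command from the question that would just mean adding the extra flag (placeholders unchanged; a sketch, not tested against the poster's server):

aria2c --max-concurrent-downloads=1 --max-connection-per-server=6 --continue=true --force-save=true --ftp-user=<user> --ftp-passwd=<password> --dir=/home/<username>/Downloads --input-file=/home/<username>/scripts/downloads.txt

The intent is that the preserved .aria2 control files should let aria2c recognize already-finished entries on the next run instead of re-checking each file.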
