
Duplicity incremental backup taking too long

I have duplicity running a daily incremental backup to S3. It's about 37 GiB.

For the first month or so it was fine; it used to finish in about an hour. Then it started taking far too long. Right now, as I type this, it is still running the daily backup it started 7 hours ago.

I'm running two commands, first the backup and then the cleanup:

duplicity --full-if-older-than 1M LOCAL.SOURCE S3.DEST --volsize 666 --verbosity 8
duplicity remove-older-than 2M S3.DEST
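
For context, this is roughly how the two commands might be wired into a daily cron script; the credential and passphrase environment variables shown here are assumptions about a typical duplicity + S3 setup, not something taken from the original post:

#!/bin/bash
# hypothetical daily wrapper -- LOCAL.SOURCE and S3.DEST are the same placeholders as above
export AWS_ACCESS_KEY_ID="..."         # assumed: duplicity's S3 backend reads these variables
export AWS_SECRET_ACCESS_KEY="..."
export PASSPHRASE="..."                # assumed: passphrase used to GPG-encrypt the volumes

duplicity --full-if-older-than 1M LOCAL.SOURCE S3.DEST --volsize 666 --verbosity 8
duplicity remove-older-than 2M S3.DEST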

The log:

Temp has 54774476800 available, backup will use approx 907857100.

So temp has plenty of space, fine. Then it starts with this...

Copying duplicity-full-signatures.20161107T090303Z.sigtar.gpg to local cache.
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-13tylb-2
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-NanCxQ-3
[...]
Copying duplicity-inc.20161110T095624Z.to.20161111T103456Z.manifest.gpg to local cache.
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-VQU2zx-30
Deleting /tmp/duplicity-ipmrKr-tempdir/mktemp-4Idklo-31
[...]

This goes on for every day up to today, taking a long time for each file. Then it continues with this...

Ignoring incremental Backupset (start_time: Thu Nov 10 09:56:24 2016; needed: Mon Nov  7 09:03:03 2016)
Ignoring incremental Backupset (start_time: Thu Nov 10 09:56:24 2016; needed: Wed Nov  9 18:09:07 2016)
Added incremental Backupset (start_time: Thu Nov 10 09:56:24 2016 / end_time: Fri Nov 11 10:34:56 2016)

After a long time...

Warning, found incomplete backup sets, probably left from aborted session
Last full backup date: Sun Mar 12 09:54:00 2017
Collection Status
-----------------
Connecting with backend: BackendWrapper
Archive dir: /home/user/.cache/duplicity/700b5f90ee4a620e649334f96747bd08

Found 6 secondary backup chains.
Secondary chain 1 of 6:
-------------------------
Chain start time: Mon Nov  7 09:03:03 2016
Chain end time: Mon Nov  7 09:03:03 2016
Number of contained backup sets: 1
Total number of contained volumes: 2
Type of backup set:                            Time:      Num volumes:
               Full         Mon Nov  7 09:03:03 2016                 2
-------------------------
Secondary chain 2 of 6:
-------------------------
Chain start time: Wed Nov  9 18:09:07 2016
Chain end time: Wed Nov  9 18:09:07 2016
Number of contained backup sets: 1
Total number of contained volumes: 11
Type of backup set:                            Time:      Num volumes:
               Full         Wed Nov  9 18:09:07 2016                11
-------------------------

Secondary chain 3 of 6:
-------------------------
Chain start time: Thu Nov 10 09:56:24 2016
Chain end time: Sat Dec 10 09:44:31 2016
Number of contained backup sets: 31
Total number of contained volumes: 41
Type of backup set:                            Time:      Num volumes:
               Full         Thu Nov 10 09:56:24 2016                11
        Incremental         Fri Nov 11 10:34:56 2016                 1
        Incremental         Sat Nov 12 09:59:47 2016                 1
        Incremental         Sun Nov 13 09:57:15 2016                 1
        Incremental         Mon Nov 14 09:48:31 2016                 1
        [...]

After listing all the chains:

Also found 0 backup sets not part of any chain, and 1 incomplete backup set.
These may be deleted by running duplicity with the "cleanup" command.

And this is only the backup part. It takes hours to get through all of that, and then only about 10 minutes to upload the 37 GiB to S3:

ElapsedTime 639.59 (10 minutes 39.59 seconds)
SourceFiles 288
SourceFileSize 40370795351 (37.6 GB)

Then comes the cleanup, which gives me this:

Cleaning up
Local and Remote metadata are synchronized, no sync needed.
Warning, found incomplete backup sets, probably left from aborted session
Last full backup date: Sun Mar 12 09:54:00 2017
There are backup set(s) at time(s):
Tue Jan 10 09:58:05 2017
Wed Jan 11 09:54:03 2017
Thu Jan 12 09:56:42 2017
Fri Jan 13 10:05:05 2017
Sat Jan 14 10:24:54 2017
Sun Jan 15 09:49:31 2017
Mon Jan 16 09:39:41 2017
Tue Jan 17 09:59:05 2017
Wed Jan 18 09:59:56 2017
Thu Jan 19 10:01:51 2017
Fri Jan 20 09:35:30 2017
Sat Jan 21 09:53:26 2017
Sun Jan 22 09:48:57 2017
Mon Jan 23 09:38:45 2017
Tue Jan 24 09:54:29 2017
Which can't be deleted because newer sets depend on them.
Found old backup chains at the following times:
Mon Nov  7 09:03:03 2016
Wed Nov  9 18:09:07 2016
Sat Dec 10 09:44:31 2016
Mon Jan  9 10:04:51 2017
Rerun command with --force option to actually delete.
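
(As the last line of that output says, the old chains listed above are only actually removed when the remove command is rerun with --force, e.g.:)

duplicity remove-older-than 2M S3.DEST --force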

Found the problem. Because of an earlier issue I had followed this answer and added this code to my script:

rm -rf ~/.cache/deja-dup/*
rm -rf ~/.cache/duplicity/*

This was supposed to be a one-time thing, to work around a random bug with the cache, but the answer didn't mention that. So every day the script deleted the cache right after it had been synchronized, and the next day duplicity had to download the whole cache from S3 all over again.
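
For illustration, a sketch of the corrected daily script with the cache wipe removed, so the metadata in ~/.cache/duplicity is reused between runs instead of being re-downloaded from S3; adding --force to the remove command follows the hint in the output above and is my assumption, not part of the original script:

#!/bin/bash
# corrected daily script: keep the local cache between runs so duplicity does not
# have to copy every signature/manifest file back from S3 each day
duplicity --full-if-older-than 1M LOCAL.SOURCE S3.DEST --volsize 666 --verbosity 8
duplicity remove-older-than 2M S3.DEST --force   # --force per the "Rerun command with --force" message (assumption)
# the one-off "rm -rf ~/.cache/duplicity/*" workaround is deliberately NOT run here every day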

