
Reading multiple lines from a text file in a loop in bash

I am trying to write an archiver script in bash, but there are a huge number of files: about 1 million.

I planned to create the list of files with:

cd /path/to/log/directory/
find . -type f > logfilelist.txt 

Then I would tar and gzip them with:

tar -cvf logarchive.tar $(cat logfilelist.txt) 
gzip logarchive.tar

But because the cat substitution expands to too many arguments, tar fails with an "Arg list too long" error.
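That limit comes from the kernel's per-exec argument budget (ARG_MAX), which you can inspect with getconf:

```shell
# Print the maximum combined size (in bytes) of command-line arguments
# plus environment variables that a single exec() call accepts.
getconf ARG_MAX
```

A million file paths easily exceeds this budget, which is why the single command-substitution approach fails.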

So I thought that if I could read the file in chunks, I could archive the files piece by piece using tar's append mode. But writing a million-iteration loop is not practical. So, can I read the list file in blocks of lines, like this:

tar -cf logarchive.tar $(first 50000 lines of logfilelist.txt)

for each remaining block of 50000 lines
do
  tar -rvf logarchive.tar $(next 50000 lines of logfilelist.txt)
done

Is it possible to cat a specific range of lines from a file?
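For reference, the chunked loop sketched in the question can be written with split, which cuts the list into fixed-size pieces (a sketch assuming GNU coreutils and GNU tar; `chunk_` is an arbitrary prefix chosen here):

```shell
# Split the file list into 50,000-line pieces named chunk_aa, chunk_ab, ...
split -l 50000 logfilelist.txt chunk_

# Start from an empty archive, then append each chunk's files.
# --files-from reads the file names from a file, so no argument-length
# limit applies.
tar -cf logarchive.tar --files-from /dev/null
for f in chunk_*; do
  tar -rf logarchive.tar --files-from "$f"
done
gzip logarchive.tar

rm chunk_*   # clean up the temporary chunk files
```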

You can use mapfile, which lets you read lines into an array and call a function after every N of them:

#!/bin/bash
# Append the lines collected so far to the archive, then reset the array.
archive() {
  tar -rvf logarchive.tar "${lines[@]}"
  lines=()
}
# Read the list into the array `lines`, invoking `archive` after every
# 50,000 lines (-c sets the quantum, -C the callback).
mapfile -t -c 50000 -C archive lines < logfilelist.txt
archive # Also call it for the final batch of fewer than 50,000 lines

PS: This is an XY problem. The real question is "How do I add a large number of files from a list to a tar archive?", and the answer to that is much easier and better:

tar -cf logarchive.tar --files-from logfilelist.txt 
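If any of the log file names might contain spaces or newlines, a NUL-delimited list is safer. GNU tar's --null option makes --files-from read NUL-terminated entries, and -z compresses in the same step (a sketch; the path is the one from the question, and writing the list one level up is an assumption to keep it out of the archive):

```shell
cd /path/to/log/directory/
# NUL-delimited list handles file names with spaces or newlines; writing it
# one directory up keeps the list itself from being found and archived.
find . -type f -print0 > ../logfilelist0.txt
# --null: entries in the --files-from list are NUL-terminated;
# -z: gzip-compress while writing the archive.
tar -czf ../logarchive.tar.gz --null --files-from ../logfilelist0.txt
```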

You can use xargs. This script appends files to the tar archive one row at a time.

# Append each listed file to the tar archive (one tar call per line;
# tar creates the archive on the first call if it does not exist).
xargs -I {} tar -rvf logarchive.tar {} < logfilelist.txt
# gzip the tar archive.
gzip logarchive.tar
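Note that -I {} forces one tar process per file, which is very slow for a million files. GNU xargs can instead pack as many names as the argument-length limit allows into each tar invocation (a sketch; -d '\n' treats each input line as a single argument, so names containing spaces survive):

```shell
# Batch as many file names per tar call as ARG_MAX allows;
# xargs runs tar repeatedly until the whole list is consumed.
xargs -d '\n' tar -rvf logarchive.tar < logfilelist.txt
gzip logarchive.tar
```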

The technical post webpages of this site follow the CC BY-SA 4.0 protocol. If you need to reprint, please indicate the site URL or the original address. Any questions, please contact: yoyou2525@163.com.
