How to append multiple files into one file as a whole using Perl?
I have multiple chunk files on my Unix system. Let's say (file1.csv, file2.csv, ..., file10.csv). I want to append all these files, in the same order, without reading them line by line.
open(my $fh, '<:encoding(UTF-8)', $filename)
    or die "Could not open file '$filename': $!";
while (my $row = <$fh>) {
    chomp $row;
    print "$row\n";
}
The above sample code reads the file line by line. But the chunk files I have contain huge amounts of data, so I cannot read them line by line, nor can I store the contents in a string using Stringify().
I'm looking for something in Perl similar to the below shell command.
cat file1.csv >> file.csv
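(For all the chunks, that shell command would just be repeated in order; a minimal sketch, using two stand-in demo files rather than the real ten chunks:)

```shell
# demo inputs (stand-ins for the real chunk files)
printf 'a\n' > file1.csv
printf 'b\n' > file2.csv

# append each chunk, in order, onto the end of file.csv
for f in file1.csv file2.csv; do
  cat "$f" >> file.csv
done
```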
Please let me know how I can do a similar copy (append at the end of the target file) in Perl. Thanks!
As Shawn mentioned above, you can read a block at a time. You can experiment with different block sizes to see what is fastest on your system. For example:
my $BLOCK_SIZE = 1024 * 1024 * 2;   # try reading 2 megabytes at a time
my @input_files = map { "file$_.csv" } 1 .. 10;
my $outfile = "output.csv";

open(my $out, '>>', $outfile) or die "Can't append to output file $outfile: $!\n";
binmode $out;   # copy raw bytes; no encoding layer needed

foreach my $input_file (@input_files) {
    open(my $in, '<', $input_file) or die "Can't read from input file $input_file: $!\n";
    binmode $in;
    my $data;
    my $blocks_read = 0;
    while (read($in, $data, $BLOCK_SIZE)) {
        print $out $data;
        $blocks_read++;
    }
    close $in;
    print "Read $blocks_read blocks from $input_file\n";
}
close $out;
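Another option worth a mention (a sketch, not part of the answer above): the core File::Copy module accepts a filehandle as the destination of copy(), so you can open the target once in append mode and let copy() do the block-wise transfer for you. The file names below (two tiny demo chunks and a `combined.csv` target) are assumptions for illustration:

```perl
use strict;
use warnings;
use File::Copy;

# For a self-contained demo, create two tiny chunk files first
# (stand-ins for the real file1.csv .. file10.csv).
for my $i (1, 2) {
    open my $fh, '>', "file$i.csv" or die "Can't write file$i.csv: $!";
    print $fh "chunk $i\n";
    close $fh;
}

my $outfile = 'combined.csv';
unlink $outfile;    # start fresh so repeated runs don't double-append
open my $out, '>>', $outfile or die "Can't append to $outfile: $!";
binmode $out;

# copy() takes a filehandle as its destination, so each chunk is
# appended block-wise at the handle's current (end-of-file) position.
for my $chunk ('file1.csv', 'file2.csv') {
    copy($chunk, $out) or die "Copy of $chunk failed: $!";
}
close $out or die "Can't close $outfile: $!";
```

Note that copy() leaves a filehandle destination open, which is why the script closes `$out` itself after the loop.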