Running multiple commands in bash, creating new directories, and moving csv files into the new directory
I have several pcap files to which I would like to apply the same commands (these are commands for Argus, a network flow tool), and then place all of the new .csv files into a new directory:
for file in *.pcap
do
    argus -r *.pcap -w packet.argus
    # Run argus to get the flow volume (total bytes) and the flow duration (seconds)
    ra -r packet.argus -s bytes dur > flow_vol_dur.csv
    # Run argus to get the source and destination ports, merge both columns together and count how many occurrences
    racluster -r packet.argus -n -s sport dport > ports.csv
    ra -r packet.argus -s stime ltime sport dport - dst port 53 > DNS.csv
    ra -r packet.argus -s stime ltime sport dport - dst port 123 > NTP.csv
    dir=$(echo ${file} | awk -F. '{print $1}' OFS=.)
    mkdir $dir
    # Move all newly created .csv files to the new directory
    mv $file*.csv $dir
done
I think the way I name the new directory is incorrect, as I only want part of the pcap file's name.
I am sure there is a better way of running the commands, especially these two:
ra -r packet.argus -s stime ltime sport dport - dst port 53 > DNS.csv
ra -r packet.argus -s stime ltime sport dport - dst port 123 > NTP.csv
Since there is only a slight change between the commands, I would like to know if there is an easier way of running them.
Also, is there a way in bash to merge the columns from different csv files into a single csv file?
e.g.
file1.csv
A,B
file2.csv
C,D
Desired output.csv
A,B,C,D
I have tried join and it does not work; is there any other bash command that will?
It looks like most of your commands don't need to run once per file, so if you change the order you can save some runtime:
#!/usr/bin/env bash
argus -r *.pcap -w packet.argus
# Run argus to get the flow volume (total bytes) and the flow duration (seconds)
ra -r packet.argus -s bytes dur > flow_vol_dur.csv
# Run argus to get the source and destination ports, merge both columns together and count how many occurrences
racluster -r packet.argus -n -s sport dport > ports.csv
ra -r packet.argus -s stime ltime sport dport - dst port 53 > DNS.csv
ra -r packet.argus -s stime ltime sport dport - dst port 123 > NTP.csv
for file in *.pcap
do
dir=$(echo "$file" | awk -F_ '{print $1}')
mkdir "$dir"
# Move all newly created .csv files to the new directory
mv "$file"*.csv "$dir"
done
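For the two nearly identical ra commands, note that they differ only in the destination port and the output file name, so you can loop over those pairs instead of repeating the whole command. A sketch reusing the ra invocation from the question (the `|| true` just keeps the loop going on a machine where Argus is not installed):

```shell
# Each input line pairs a destination port with the name of its output file
while read -r port name; do
    ra -r packet.argus -s stime ltime sport dport - dst port "$port" > "$name.csv" || true
done <<'EOF'
53 DNS
123 NTP
EOF
```

Adding a new protocol is then one extra line of data (e.g. `22 SSH`) rather than another copied command.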
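For the directory name, bash parameter expansion can strip the `.pcap` extension without calling awk at all. A minimal sketch (the filename `capture01.pcap` is just an example):

```shell
file="capture01.pcap"   # example filename, for illustration
dir="${file%.pcap}"     # strip the trailing ".pcap" suffix
echo "$dir"             # → capture01
mkdir -p "$dir"         # -p: no error if the directory already exists
```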
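As for merging columns from different csv files: join matches rows on a key field, which is not what you want here. paste concatenates corresponding lines of its input files, which produces exactly the desired output. A sketch using the example files from the question:

```shell
# Recreate the two example files from the question
printf 'A,B\n' > file1.csv
printf 'C,D\n' > file2.csv

# paste joins line N of each file; -d, separates them with a comma
paste -d, file1.csv file2.csv > output.csv
cat output.csv   # → A,B,C,D
```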