EDIT 2: I've rewritten this question to describe my desired outcome more clearly.
I'm currently using this code to output a list of files within various directories:
for file in /directoryX/*.txt
do
    grep -rl "Annual Compensation" "$file"
done
The output lists every file containing the table I'm trying to extract, in a layout like this:
txtfile1.txt
txtfile2.txt
txtfile3.txt
I have been using this awk command on each individual .txt file to extract the table and then send it to a .csv:
awk '/Annual Compensation/{f=1} f{print; if (/<\/TABLE>/) exit}' txtfile1.txt > txtfile1.csv
My goal is to find a command that will run my awk command against each file in the list all at once. Thank you to those who have provided suggestions already.
If I understand what you're asking, what you want to do is add a line after the grep, or instead of the grep, that says:
awk '/Annual Compensation/{f=1} f{print; if (/<\/TABLE>/) exit}' "$file" > "${file}_new.csv"
When you say ${file}_new.csv, the shell expands the file variable and then appends the string "_new.csv" to it. That's what you're shooting for, right?
Modifying your code:
files=()
for file in /directoryX/*.txt
do
    files+=($(grep -rl "Annual Compensation" "$file"))
done

for f in "${files[@]}"; do
    awk '/Annual Compensation/{f=1} f{print; if (/<\/TABLE>/) exit}' "$f" > "${f}_new.csv"
done
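As a side note, the two loops can also be merged into a single pass. This is only a sketch: a temporary directory stands in for /directoryX so it can be run as-is, and the sample file contents are invented for demonstration.

```shell
# Sketch only: a temp dir stands in for /directoryX, with made-up
# sample files, so the example is self-contained.
dir=$(mktemp -d)
printf '<TABLE>\nAnnual Compensation\nrow 1\n</TABLE>\n' > "$dir/txtfile1.txt"
printf 'no table here\n' > "$dir/txtfile2.txt"

for file in "$dir"/*.txt
do
    # grep -q only reports whether a match exists; nothing is printed
    if grep -q "Annual Compensation" "$file"; then
        awk '/Annual Compensation/{f=1} f{print; if (/<\/TABLE>/) exit}' "$file" > "${file}_new.csv"
    fi
done
```

This avoids reading each matching file twice (once by grep to build the list, once by awk).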
Alternative code:
files=($(grep -rl "Annual Compensation" /directoryX/*))
for f in "${files[@]}"; do
    awk '/Annual Compensation/{f=1} f{print; if (/<\/TABLE>/) exit}' "$f" > "${f}_new.csv"
done
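One caveat with the array approach: filenames containing spaces get split apart by word splitting when the grep output is assigned to the array. Piping grep -l into a while-read loop avoids that. This is only a sketch, again using a temporary directory with an invented file as a stand-in for /directoryX:

```shell
# Sketch only: a temp dir with a made-up file (note the space in its
# name) stands in for /directoryX. IFS= read -r keeps each grep -l
# output line intact, spaces and all.
dir=$(mktemp -d)
printf '<TABLE>\nAnnual Compensation\npay data\n</TABLE>\n' > "$dir/file a.txt"

grep -l "Annual Compensation" "$dir"/*.txt |
while IFS= read -r f
do
    awk '/Annual Compensation/{f=1} f{print; if (/<\/TABLE>/) exit}' "$f" > "${f}_new.csv"
done
```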
In both cases I have not verified the grep and awk results - they are just a copy-paste of your code.