
How to read columns from a CSV file into arrays in bash

I have a CSV file that looks like this:

NameColumn1;NameColumn2;NameColumn3;NameColumn4
Row1;Row1;Row1;Row1;
Row2;Row2;Row2;Row2;
Row3;Row3;Row3;Row3;

I want to take the values Row1, Row2 and Row3 from NameColumn1 and put them into an array, and do the same for NameColumn2, NameColumn3 and NameColumn4.

I don't know how many rows I will have, but I know the number of columns. Any help?

Thank you.

Assuming you have four columns:

while read a b c d; do
  ar1+=($a)
  ar2+=($b)
  ar3+=($c)
  ar4+=($d)
done < <(sed 's/;/\t/g' somefile.txt)

You then have four arrays, ar1 through ar4, with the column values in them. Note that the header line is read in as well, so it lands at index 0 of each array. The process substitution (`< <(...)`) matters here: piping sed into the loop would run the loop in a subshell and the arrays would be lost when it exits.
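A quick way to check the result, with the question's sample data recreated inline (the file name `somefile.txt` is the one assumed by the loop above):

```shell
#!/usr/bin/env bash
# Recreate the sample file from the question.
printf '%s\n' \
  'NameColumn1;NameColumn2;NameColumn3;NameColumn4' \
  'Row1;Row1;Row1;Row1;' \
  'Row2;Row2;Row2;Row2;' \
  'Row3;Row3;Row3;Row3;' > somefile.txt

# The loop from the answer above: sed turns ';' into tabs, read splits on them.
while read a b c d; do
  ar1+=($a)
  ar2+=($b)
  ar3+=($c)
  ar4+=($d)
done < <(sed 's/;/\t/g' somefile.txt)

echo "${ar1[0]}"   # header value: NameColumn1
echo "${ar1[1]}"   # first data row: Row1
echo "${#ar4[@]}"  # 4 entries: the header plus three data rows
```

The trailing semicolon on each data row is harmless here: sed turns it into a trailing tab, which read strips as IFS whitespace.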

With the -a option of the read builtin:

#!/usr/bin/env bash

while IFS=';' read -ra array; do
  ar1+=("${array[0]}")
  ar2+=("${array[1]}")
  ar3+=("${array[2]}")
  ar4+=("${array[3]}")
done < file.csv

printf '%s\n' "${ar1[@]}" "${ar2[@]}" "${ar3[@]}" "${ar4[@]}"

  • Just remember that array indexing starts at zero.
  • The advantage of this approach is that you don't need a separate variable for each field.
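Since the question says the number of columns is known but the approach above generalizes, here is a hedged sketch of the same `read -a` idea without hard-coding four arrays. It uses bash 4.3+ namerefs (`declare -n`); the variable names `col0`, `col1`, ... are my own choice, not from the original answer:

```shell
#!/usr/bin/env bash
# Recreate the data rows from the question.
printf '%s\n' \
  'Row1;Row1;Row1;Row1;' \
  'Row2;Row2;Row2;Row2;' \
  'Row3;Row3;Row3;Row3;' > file.csv

while IFS=';' read -ra fields; do
  for i in "${!fields[@]}"; do
    declare -n col="col$i"     # nameref: col now aliases the array col$i
    col+=("${fields[$i]}")     # append this row's field to that column array
  done
done < file.csv

printf '%s\n' "${col0[@]}"     # Row1 Row2 Row3
```

Each `declare -n` re-points the nameref, so one loop body serves any number of columns; the trade-off is that the column arrays are only addressable by constructed names.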

Using @SaintHax's answer with a tweak; this worked for my application and similarly formatted data. Note the addition of quotes around the stored variables and the removal of the sed command on the redirected file. The IFS setting was removed thanks to @Jetchisel's comment. (Be aware that without IFS=';' this only splits whitespace-delimited data; for the semicolon-delimited file in the question you still need IFS=';' on the read.)

while read -r a b c d; do
    ar1+=("$a")
    ar2+=("$b")
    ar3+=("$c")
    ar4+=("$d")
done < somefile.csv
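For the question's exact file, which is semicolon-delimited and has a header line, a sketch combining the pieces above might look like this (the extra leading `read` that skips the header is my addition):

```shell
#!/usr/bin/env bash
# Recreate the question's file: header line plus three data rows.
printf '%s\n' \
  'NameColumn1;NameColumn2;NameColumn3;NameColumn4' \
  'Row1;Row1;Row1;Row1;' \
  'Row2;Row2;Row2;Row2;' \
  'Row3;Row3;Row3;Row3;' > somefile.csv

{
  read -r _header                      # consume the header so it stays out of the arrays
  while IFS=';' read -r a b c d _; do  # the extra _ swallows the trailing ';', so d is clean
    ar1+=("$a")
    ar2+=("$b")
    ar3+=("$c")
    ar4+=("$d")
  done
} < somefile.csv

printf '%s\n' "${ar1[@]}"   # Row1 Row2 Row3
```

Without the trailing `_`, the last variable (`d`) would receive the remainder of the line including the trailing semicolon, since `;` is not IFS whitespace and is not stripped from the last field.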
