
How do I output the results of a HiveQL query to CSV using a shell script?

I would like to run multiple Hive queries, preferably in parallel rather than sequentially, and store the output of each query in a CSV file. For example, query1 output in csv1, query2 output in csv2, etc. I would be running these queries after leaving work, with the goal of having output to analyze during the next business day. I am interested in using a bash shell script because then I'd be able to set up a cron task to run it at a specific time of day.
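For example, a crontab entry along these lines (the script path and schedule are placeholders) is what I have in mind:

#Illustrative crontab entry: run the export script at 19:00 on weekdays
0 19 * * 1-5 /path/to/run_hive_exports.sh >> /path/to/hive_exports.log 2>&1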

I know how to store the results of a HiveQL query in a CSV file, one query at a time. I do that with something like the following:

hive -e "SELECT * FROM db.table;" | tr "\t" "," > example.csv

The problem with the above is that I have to monitor when the process finishes and manually start the next query. I also know how to run multiple queries, in sequence, like so:

hive -f hivequeries.hql
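
Here hivequeries.hql is just a plain-text file of HiveQL statements that hive runs one after another, e.g. (illustrative contents):

-- hivequeries.hql
SELECT * FROM db.table1;
SELECT * FROM db.table2;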

Is there a way to combine these two methods? Is there a smarter way to achieve my goals?

Code answers are preferred since I do not know bash well enough to write it from scratch.

This question is a variant of another question: How do I output the results of a HiveQL query to CSV?

You can run and monitor parallel jobs in a shell script:

#!/bin/bash

#Run parallel processes and wait for their completion

#Add loop here or add more calls
hive -e "SELECT * FROM db.table1;" | tr "\t" "," > example1.csv &
hive -e "SELECT * FROM db.table2;" | tr "\t" "," > example2.csv &
hive -e "SELECT * FROM db.table3;" | tr "\t" "," > example3.csv &

#Note: the trailing ampersand in the commands above runs each one as a parallel process
#You can wrap the hive call in a function and do some logging in it, etc,
#and call the function as a parallel process in the same way (see the sketch after this script)
#Modify this script to fit your needs

#Now wait for all processes to complete

#Failed processes count
FAILED=0

for job in $(jobs -p)
do
   echo "job=$job"
   wait "$job" || let "FAILED+=1"
done

#Final status check
if [ "$FAILED" != "0" ]; then
    echo "Execution FAILED!  ($FAILED)"
    #Do something here, log or send a message, etc
    exit 1
fi

#Normal exit
#Do something else here
exit 0
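
As the comments suggest, you can wrap the hive call in a function to add logging; a minimal sketch (the function name, log file, and queries are just examples):

#!/bin/bash
set -o pipefail   #Without this, $? reflects tr, not a failing hive

#Hypothetical wrapper: run one query, write CSV, log start/end and status
run_query() {
    local query="$1" outfile="$2"
    echo "$(date '+%F %T') START $outfile" >> run.log
    hive -e "$query" | tr "\t" "," > "$outfile"
    local status=$?
    echo "$(date '+%F %T') END $outfile status=$status" >> run.log
    return "$status"
}

#Launch the wrapped calls as parallel processes, just like the plain hive calls
run_query "SELECT * FROM db.table1;" example1.csv &
run_query "SELECT * FROM db.table2;" example2.csv &

#The jobs/wait loop from the script above then works unchanged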

There are other ways (xargs, GNU parallel) to run parallel processes in the shell, and plenty of resources on them. Read also https://www.slashroot.in/how-run-multiple-commands-parallel-linux and https://thoughtsimproved.wordpress.com/2015/05/18/parellel-processing-in-bash/
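
With xargs, for instance, the fan-out is controlled by the -P option (a GNU/BSD extension, not POSIX); a minimal sketch along the same lines, feeding the table suffixes on stdin:

#Run up to 3 hive exports at a time; xargs substitutes each suffix for {}
printf '%s\n' 1 2 3 4 | xargs -P 3 -I {} \
    sh -c 'hive -e "SELECT * FROM db.table{};" | tr "\t" "," > example{}.csv'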

With GNU Parallel it looks like this:

doit() {
  id="$1"
  hive -e "SELECT * FROM db.table$id;" | tr "\t" "," > example"$id".csv
}
export -f doit
parallel --bar doit ::: 1 2 3 4
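
Here ::: supplies the arguments (the table suffixes) on the command line, export -f makes doit visible to the shells that parallel spawns, and --bar draws a progress bar while the jobs run.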

If your queries do not share the same template, you can do:

queries.txt:
SELECT * FROM db.table1;
SELECT id,name FROM db.person;
... other queries ...

cat queries.txt | parallel --bar 'hive -e {} | tr "\t" "," > example{#}.csv'
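
In the template, {} is replaced by the query read from queries.txt (GNU parallel quotes it, so it reaches hive -e intact) and {#} by the job sequence number, so the first query's output lands in example1.csv, the second in example2.csv, and so on.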

Spend 15 minutes reading chapters 1+2 of https://doi.org/10.5281/zenodo.1146014 to learn the basics, and chapter 7 to learn more about how to run more jobs in parallel.
