
How to set the number of MapReduce tasks equal to 1 in Hive

I tried the following in Hive:

set hive.exec.reducers.max = 1;
set mapred.reduce.tasks = 1;

from flat_json
insert overwrite table aggr_pgm_measure PARTITION(dt='${START_TIME}')
reduce  log_time,
 req_id, ac_id, client_key, rulename, categoryname, bsid, visitorid, visitorgroupid, visitortargetid, targetpopulationid, windowsessionid, eventseq, event_code, eventstarttime
 using '${SCRIPT_LOC}/aggregator.pl' as 
 metric_id, metric_value, aggr_type, rule_name, category_name; 

In spite of setting the maximum number of reducers and the number of reduce tasks to 1, I see 2 MapReduce tasks getting generated. Please see below:

 > set hive.exec.reducers.max = 1;
hive>  set mapred.reduce.tasks = 1;
hive>
    > from flat_json
    > insert overwrite table aggr_pgm_measure PARTITION(dt='${START_TIME}')
    > reduce  log_time,
    >  req_id, ac_id, client_key, rulename, categoryname, bsid, visitorid, visitorgroupid, visitortargetid, targetpopulationid, windowsessionid, eventseq, event_code, eventstarttime
    >  using '${SCRIPT_LOC}/aggregator.pl' as
    >  metric_id, metric_value, aggr_type, rule_name, category_name;
converting to local s3://dsp-emr-test/anurag/dsp-test/60mins/script/aggregator.pl
Added resource: /mnt/var/lib/hive_07_1/downloaded_resources/aggregator.pl
Total MapReduce jobs = 2
Launching Job 1 out of 2
Number of reduce tasks is set to 0 since there's no reduce operator
Starting Job = job_201112270825_0009, Tracking URL = http://ip-10-85-66-9.ec2.internal:9100/jobdetails.jsp?jobid=job_201112270825_0009
Kill Command = /home/hadoop/.versions/0.20.205/libexec/../bin/hadoop job  -Dmapred.job.tracker=10.85.66.9:9001 -kill job_201112270825_0009
2011-12-27 10:30:03,542 Stage-1 map = 0%,  reduce = 0%

The two things you think are related are not. You are setting the number of reduce tasks, not the number of MapReduce jobs. Hive converts your query into several MapReduce jobs, depending on the nature of what needs to be done. Each MapReduce job consists of multiple map tasks and reduce tasks.

What you are setting is the maximum number of reduce tasks per job. That means each MapReduce job will be constrained in how many reduce tasks it can fire up, but you still need to run two jobs. There is nothing you can do about the number of MapReduce jobs in Hive; it needs to run each stage in order to execute your query.
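If you want to see how many jobs Hive will launch before actually running the query, you can prefix it with `EXPLAIN`. The output lists the planned stages, and each MapReduce stage runs as its own job. A sketch using a shortened version of the query above (the column list is trimmed for illustration):

```sql
-- Prefix the query with EXPLAIN to list the stages Hive plans to run.
-- Each MapReduce stage in the output becomes a separate job, so
-- mapred.reduce.tasks = 1 only limits the reducers *within* each stage.
EXPLAIN
FROM flat_json
INSERT OVERWRITE TABLE aggr_pgm_measure PARTITION (dt='${START_TIME}')
REDUCE log_time, req_id
USING '${SCRIPT_LOC}/aggregator.pl' AS metric_id, metric_value;
```

Counting the `Stage-N` entries marked as map-reduce stages in the `EXPLAIN` output tells you how many jobs to expect, which here would show the two jobs seen in the console transcript.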
