
Hadoop - How to run another mapreduce job while one is running?

I already have a long-running MapReduce job on my cluster. When I submit another job, it gets stuck at the point below, which suggests it is waiting for the currently running job to complete:

hive> select distinct(circle) from vf_final_table_orc_format1;
Query ID = hduser_20181022153503_335ffd89-1528-49be-b091-21213d702a03
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks not specified. Estimated from input data size: 10
In order to change the average load for a reducer (in bytes):
set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
set mapreduce.job.reduces=<number>
Starting Job = job_1539782606189_0033, Tracking URL = http://secondary:8088/proxy/application_1539782606189_0033/
Kill Command = /home/hduser/hadoop/bin/hadoop job  -kill job_1539782606189_0033

I am currently running a MapReduce job on 166 GB of data. My setup consists of 7 nodes: 5 are DataNodes with 32 GB RAM and 8.7 TB HDD each, while the NameNode and Secondary NameNode each have 32 GB RAM and a 1.1 TB HDD.

What settings do I need to tweak in order to execute jobs in parallel? I am currently using Hadoop 2.5.2.

EDIT: Right now my cluster is consuming only 8-10 GB of the 32 GB of RAM per node. Other Hive queries and MR jobs are stuck, waiting for the single running job to finish. How do I increase memory consumption so that more jobs can execute in parallel? Here is the current output of the ps command:

[hduser@secondary ~]$ ps -ef | grep -i runjar | grep -v grep
hduser   110398      1  0 Nov11 ?        00:07:15 /opt/jdk1.8.0_77//bin/java -Dproc_jar -Xmx1000m 
-Dhadoop.log.dir=/home/hduser/hadoop/logs -Dyarn.log.dir=/home/hduser/hadoop/logs 
-Dhadoop.log.file=yarn.log -Dyarn.log.file=yarn.log -Dyarn.home.dir= 
-Dyarn.id.str= -Dhadoop.root.logger=INFO,console -Dyarn.root.logger=INFO,console -Dyarn.policy.file=hadoop-policy.xml
-Dhadoop.log.dir=/home/hduser/hadoop/logs -Dyarn.log.dir=/home/hduser/hadoop/logs 
-Dhadoop.log.file=yarn.log -Dyarn.log.file=yarn.log 
-Dyarn.home.dir=/home/hduser/hadoop -Dhadoop.home.dir=/home/hduser/hadoop 
-Dhadoop.root.logger=INFO,console 
-Dyarn.root.logger=INFO,console 
-classpath /home/hduser/hadoop/etc/hadoop:/home/hduser/hadoop/etc/hadoop:/home/hduser/hadoop/etc/hadoop:/home/hduser/hadoop/share/hadoop/common/lib/*:/home/hduser/hadoop/share/hadoop/common/*:/home/hduser/hadoop/share/hadoop/hdfs:/home/hduser/hadoop/share/hadoop/hdfs/lib/*:/home/hduser/hadoop/share/hadoop/hdfs/*:/home/hduser/hadoop/share/hadoop/yarn/lib/*:/home/hduser/hadoop/share/hadoop/yarn/*:/home/hduser/hadoop/share/hadoop/mapreduce/lib/*:/home/hduser/hadoop/share/hadoop/mapreduce/*:/home/hduser/hadoop/contrib/capacity-scheduler/*.jar:/home/hduser/hadoop/share/hadoop/yarn/*:/home/hduser/hadoop/share/hadoop/yarn/lib/* 
org.apache.hadoop.util.RunJar abc.jar def.mydriver2 /raw_data /mr_output/

STEPS

Hive runs query plans in stages. Some stages depend on other stages and cannot be started until the previous stages have completed.

However, stages that are independent of each other can run concurrently. Running such stages in parallel can reduce the overall job running time. To enable parallel execution of stages:

set hive.exec.parallel=true;
set hive.exec.parallel.thread.number=8;
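
If you want these settings applied to every session rather than set manually each time, they can also go into hive-site.xml. A minimal sketch (same two properties as above; 8 threads is just the example value from this answer, tune it to your workload):

```xml
<!-- hive-site.xml: enable parallel execution of independent query stages -->
<property>
  <name>hive.exec.parallel</name>
  <value>true</value>
</property>
<property>
  <!-- upper bound on how many stages of one query may run at the same time -->
  <name>hive.exec.parallel.thread.number</name>
  <value>8</value>
</property>
```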

Parallel execution will increase cluster utilization. If the cluster's utilization is already high, parallel execution will not help much in terms of overall performance.
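
Note that hive.exec.parallel only parallelizes stages within a single query; whether separate jobs run side by side is decided by YARN's resource configuration. Since the cluster is reportedly using only 8-10 GB of 32 GB per node, it is worth checking how much memory the NodeManagers are allowed to hand out. A hedged yarn-site.xml sketch for the DataNodes (the values below are illustrative assumptions for 32 GB nodes, not tested on this cluster):

```xml
<!-- yarn-site.xml on each DataNode: memory YARN may allocate to containers -->
<property>
  <!-- assumption: reserve ~8 GB for the OS and DataNode daemon, give YARN the rest -->
  <name>yarn.nodemanager.resource.memory-mb</name>
  <value>24576</value>
</property>
<property>
  <!-- largest single container; smaller caps leave room for more concurrent containers -->
  <name>yarn.scheduler.maximum-allocation-mb</name>
  <value>8192</value>
</property>
```

If the first job's containers still occupy the whole cluster, the scheduler configuration (Capacity or Fair Scheduler queues) is the next place to look, since it controls how resources are shared between concurrent applications.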

Let me know if this helps.

