Limit the number of files when using INSERT INTO in Hive SQL

Every time I run an INSERT INTO in Hive SQL, a new file is created. How can I limit the number of files produced when using INSERT INTO?

I am afraid that one day the HDFS filesystem will break because it holds too many files.
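
As the job log below shows, each reducer writes its own output file, and Hive itself prints hints for controlling the reducer count. A minimal sketch of pinning the count to one reducer per run, reusing the table names from the question (whether a single reducer is acceptable depends on your data volume):

-- force a single reducer so each INSERT produces one output file
SET mapreduce.job.reduces=1;

INSERT INTO TABLE bi_st.st_usr_member_active_day
SELECT * FROM bi_temp.zjy_ini_st_usr_member_active_day_temp88;

Note that INSERT INTO appends, so every run still adds at least one new file; this only caps how many files a single run creates.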

hive> insert into table bi_st.st_usr_member_active_day
    > select * from bi_temp.zjy_ini_st_usr_member_active_day_temp88;
Query ID = root_20170209100404_5acdd3bf-071d-4178-aeff-b40d16499aac
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks determined at compile time: 2
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Starting Job = job_1484675879577_4078, Tracking URL = http://hadoopmaster:8088/proxy/application_1484675879577_4078/
Kill Command = /opt/cloudera/parcels/CDH-5.9.0-1.cdh5.9.0.p0.23/lib/hadoop/bin/hadoop job  -kill job_1484675879577_4078
Hadoop job information for Stage-1: number of mappers: 1; number of reducers: 2
2017-02-09 10:04:41,247 Stage-1 map = 0%,  reduce = 0%
2017-02-09 10:04:47,425 Stage-1 map = 100%,  reduce = 0%, Cumulative CPU 1.17 sec
2017-02-09 10:04:53,598 Stage-1 map = 100%,  reduce = 50%, Cumulative CPU 3.02 sec
2017-02-09 10:04:57,727 Stage-1 map = 100%,  reduce = 100%, Cumulative CPU 4.81 sec
MapReduce Total cumulative CPU time: 4 seconds 810 msec
Ended Job = job_1484675879577_4078
Loading data to table bi_st.st_usr_member_active_day
Table bi_st.st_usr_member_active_day stats: [numFiles=8, numRows=548, totalSize=31267, rawDataSize=0]
MapReduce Jobs Launched: 
Stage-Stage-1: Map: 1  Reduce: 2   Cumulative CPU: 4.81 sec   HDFS Read: 56745 HDFS Write: 10220 SUCCESS
Total MapReduce CPU Time Spent: 4 seconds 810 msec
OK
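
Besides fixing the reducer count, Hive can merge small output files in an extra step after the job, and a fragmented table can be compacted by rewriting it over itself. A hedged sketch: the hive.merge.* settings below exist in Hive, but their defaults vary by version, and the byte sizes are illustrative assumptions, not recommendations.

-- merge small files produced by map-only and map-reduce jobs
SET hive.merge.mapfiles=true;
SET hive.merge.mapredfiles=true;
-- trigger the merge step when the average output file falls below this size (bytes)
SET hive.merge.smallfiles.avgsize=16000000;
-- target size of the merged files (bytes)
SET hive.merge.size.per.task=256000000;

-- alternatively, compact the files already accumulated in the table
INSERT OVERWRITE TABLE bi_st.st_usr_member_active_day
SELECT * FROM bi_st.st_usr_member_active_day;

The merge settings only affect new INSERTs; the INSERT OVERWRITE rewrite is the usual way to bring down numFiles (8 in the stats above) for data already in the table.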
