
Hive: java.lang.OutOfMemoryError: Java heap space and Job running in-process (local Hadoop)

My setup: a 4-node cluster (1 master, 3 workers) on Google Cloud Platform, running NixOS Linux.

I have been using the TPC-DS toolkit to generate the data, and the queries are standard. On smaller datasets / simpler queries they work fine. I got the queries from here: https://github.com/hortonworks/hive-testbench/tree/hdp3/sample-queries-tpcds

Here is the first one, query1.sql:

WITH customer_total_return AS 
( 
         SELECT   sr_customer_sk AS ctr_customer_sk , 
                  sr_store_sk    AS ctr_store_sk , 
                  Sum(sr_fee)    AS ctr_total_return 
         FROM     store_returns , 
                  date_dim 
         WHERE    sr_returned_date_sk = d_date_sk 
         AND      d_year =2000 
         GROUP BY sr_customer_sk , 
                  sr_store_sk) 
SELECT   c_customer_id 
FROM     customer_total_return ctr1 , 
         store , 
         customer 
WHERE    ctr1.ctr_total_return > 
         ( 
                SELECT Avg(ctr_total_return)*1.2 
                FROM   customer_total_return ctr2 
                WHERE  ctr1.ctr_store_sk = ctr2.ctr_store_sk) 
AND      s_store_sk = ctr1.ctr_store_sk 
AND      s_state = 'NM' 
AND      ctr1.ctr_customer_sk = c_customer_sk 
ORDER BY c_customer_id limit 100;

At first the problem was that I could not run this successfully at all, running into java.lang.OutOfMemoryError: Java heap space.

What I did:

  1. Increased the power of the GCP nodes (up to 7.5 GB of RAM and dual-core CPUs)
  2. Set these variables in the Hive CLI:
set mapreduce.map.memory.mb=2048;
set mapreduce.map.java.opts=-Xmx1024m;
set mapreduce.reduce.memory.mb=4096;
set mapreduce.reduce.java.opts=-Xmx3072m;
set mapred.child.java.opts=-Xmx1024m;

  3. Restarted Hive
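As a quick sanity check on step 2, calling `set` with a property name and no value makes the Hive CLI print the property's current setting, which confirms the overrides took effect:

```sql
-- Calling set with no value prints the property's current setting
set mapreduce.reduce.java.opts;
```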

After that, this query (along with other similar ones) worked on the 1 GB dataset. I monitored the situation with htop: memory usage never exceeded 2 GB, while both CPU cores were pinned at 100% almost constantly.

Now the problem is that with more complex queries on a larger dataset, the errors start again:

The query runs for a good minute or so, but ends in FAIL. The full output:

hive> with customer_total_return as
    > (select sr_customer_sk as ctr_customer_sk
    > ,sr_store_sk as ctr_store_sk
    > ,sum(SR_FEE) as ctr_total_return
    > from store_returns
    > ,date_dim
    > where sr_returned_date_sk = d_date_sk
    > and d_year =2000
    > group by sr_customer_sk
    > ,sr_store_sk)
    >  select c_customer_id
    > from customer_total_return ctr1
    > ,store
    > ,customer
    > where ctr1.ctr_total_return > (select avg(ctr_total_return)*1.2
    > from customer_total_return ctr2
    > where ctr1.ctr_store_sk = ctr2.ctr_store_sk)
    > and s_store_sk = ctr1.ctr_store_sk
    > and s_state = 'TN'
    > and ctr1.ctr_customer_sk = c_customer_sk
    > order by c_customer_id
    > limit 100;
No Stats for default@store_returns, Columns: sr_returned_date_sk, sr_fee, sr_store_sk, sr_customer_sk
No Stats for default@date_dim, Columns: d_date_sk, d_year
No Stats for default@store, Columns: s_state, s_store_sk
No Stats for default@customer, Columns: c_customer_sk, c_customer_id
Query ID = root_20190811164854_c253c67c-ef94-4351-b4d3-74ede4c5d990
Total jobs = 14
Stage-29 is selected by condition resolver.
Stage-1 is filtered out by condition resolver.
Stage-30 is selected by condition resolver.
Stage-10 is filtered out by condition resolver.
SLF4J: Found binding in [jar:file:/nix/store/jjm6636r99r0irqa03dc1za9gs2b4fx6-source/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/nix/store/q9jpwzbqbg8k8322q785xfavg0p0v18i-hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
Execution completed successfully
MapredLocal task succeeded
SLF4J: Found binding in [jar:file:/nix/store/jjm6636r99r0irqa03dc1za9gs2b4fx6-source/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/nix/store/q9jpwzbqbg8k8322q785xfavg0p0v18i-hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]
Execution completed successfully
MapredLocal task succeeded
Launching Job 3 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:49:19,415 Stage-20 map = 0%,  reduce = 0%
2019-08-11 16:49:22,418 Stage-20 map = 100%,  reduce = 0%
Ended Job = job_local404291246_0005
Launching Job 4 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:49:24,718 Stage-22 map = 0%,  reduce = 0%
2019-08-11 16:49:27,721 Stage-22 map = 100%,  reduce = 0%
Ended Job = job_local566999875_0006
Launching Job 5 out of 14
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2019-08-11 16:49:29,958 Stage-2 map = 0%,  reduce = 0%
2019-08-11 16:49:33,970 Stage-2 map = 100%,  reduce = 0%
2019-08-11 16:49:35,974 Stage-2 map = 100%,  reduce = 100%
Ended Job = job_local1440279093_0007
Launching Job 6 out of 14
Number of reduce tasks not specified. Estimated from input data size: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2019-08-11 16:49:37,235 Stage-11 map = 0%,  reduce = 0%
2019-08-11 16:49:40,421 Stage-11 map = 100%,  reduce = 0%
2019-08-11 16:49:42,424 Stage-11 map = 100%,  reduce = 100%
Ended Job = job_local1508103541_0008
SLF4J: Found binding in [jar:file:/nix/store/jjm6636r99r0irqa03dc1za9gs2b4fx6-source/lib/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: Found binding in [jar:file:/nix/store/q9jpwzbqbg8k8322q785xfavg0p0v18i-hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.
SLF4J: Actual binding is of type [org.apache.logging.slf4j.Log4jLoggerFactory]

2019-08-11 16:49:51 Dump the side-table for tag: 1 with group count: 21 into file: file:/tmp/root/3ab30b3b-380d-40f5-9f72-68788d998013/hive_2019-08-11_16-48-54_393_105456265244058313-1/-local-10019/HashTable-Stage-19/MapJoin-mapfile71--.hashtable
Execution completed successfully
MapredLocal task succeeded
Launching Job 7 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:49:53,956 Stage-19 map = 100%,  reduce = 0%
Ended Job = job_local2121921517_0009
Stage-26 is filtered out by condition resolver.
Stage-27 is selected by condition resolver.
Stage-4 is filtered out by condition resolver.

2019-08-11 16:50:01 Dump the side-table for tag: 0 with group count: 99162 into file: file:/tmp/root/3ab30b3b-380d-40f5-9f72-68788d998013/hive_2019-08-11_16-48-54_393_105456265244058313-1/-local-10017/HashTable-Stage-17/MapJoin-mapfile60--.hashtable
2019-08-11 16:50:02 Uploaded 1 File to: file:/tmp/root/3ab30b3b-380d-40f5-9f72-68788d998013/hive_2019-08-11_16-48-54_393_105456265244058313-1/-local-10017/HashTable-Stage-17/MapJoin-mapfile60--.hashtable (2832042 bytes)
Execution completed successfully
MapredLocal task succeeded
Launching Job 9 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:50:04,004 Stage-17 map = 0%,  reduce = 0%
2019-08-11 16:50:05,005 Stage-17 map = 100%,  reduce = 0%
Ended Job = job_local694362009_0010
Stage-24 is selected by condition resolver.
Stage-25 is filtered out by condition resolver.
Stage-5 is filtered out by condition resolver.

SLF4J: Found binding in [jar:file:/nix/store/q9jpwzbqbg8k8322q785xfavg0p0v18i-hadoop-3.1.1/share/hadoop/common/lib/slf4j-log4j12-1.7.25.jar!/org/slf4j/impl/StaticLoggerBinder.class]
2019-08-11 16:50:12 Starting to launch local task to process map join;  maximum memory = 239075328
Execution completed successfully
MapredLocal task succeeded
Launching Job 11 out of 14
Number of reduce tasks is set to 0 since there's no reduce operator
Job running in-process (local Hadoop)
2019-08-11 16:50:14,254 Stage-13 map = 100%,  reduce = 0%
Ended Job = job_local1812693452_0011
Launching Job 12 out of 14
Number of reduce tasks determined at compile time: 1
In order to change the average load for a reducer (in bytes):
  set hive.exec.reducers.bytes.per.reducer=<number>
In order to limit the maximum number of reducers:
  set hive.exec.reducers.max=<number>
In order to set a constant number of reducers:
  set mapreduce.job.reduces=<number>
Job running in-process (local Hadoop)
2019-08-11 16:50:15,481 Stage-6 map = 0%,  reduce = 0%
Ended Job = job_local920309638_0012 with errors
Error during job, obtaining debugging information...
FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask
MapReduce Jobs Launched: 
Stage-Stage-20:  HDFS Read: 8662606197 HDFS Write: 0 SUCCESS
Stage-Stage-22:  HDFS Read: 9339349675 HDFS Write: 0 SUCCESS
Stage-Stage-2:  HDFS Read: 9409277766 HDFS Write: 0 SUCCESS
Stage-Stage-11:  HDFS Read: 9409277766 HDFS Write: 0 SUCCESS
Stage-Stage-19:  HDFS Read: 4704638883 HDFS Write: 0 SUCCESS
Stage-Stage-17:  HDFS Read: 4771516428 HDFS Write: 0 SUCCESS
Stage-Stage-13:  HDFS Read: 4771516428 HDFS Write: 0 SUCCESS
Stage-Stage-6:  HDFS Read: 0 HDFS Write: 0 FAIL
Total MapReduce CPU Time Spent: 0 msec

The problem in the hive.log file is still the same:

java.lang.Exception: java.lang.OutOfMemoryError: Java heap space

I realized that my worker nodes are not actually doing anything (htop shows them idle; only the master node works), and even the output says so:

Job running in-process (local Hadoop)

How do I make Hive use HDFS and not just the local Hadoop runner? Running hdfs dfs -df -h hdfs://<redacted>:9000/ returns

Filesystem                   Size    Used  Available  Use%
hdfs://<redacted>:9000  88.5 G  34.3 G     35.2 G   39%

Which is correct: I have 3 worker nodes, each with a 30 GB disk.
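From what I understand, `Job running in-process (local Hadoop)` means Hive submits jobs to the local MapReduce runner rather than to YARN. A hedged sketch of the settings I believe control this (values assumed, not yet verified on my cluster; they can also live in mapred-site.xml / hive-site.xml):

```sql
-- Submit jobs to YARN instead of the in-process local runner
set mapreduce.framework.name=yarn;
-- Don't let Hive auto-convert small jobs back to local mode
set hive.exec.mode.local.auto=false;
```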

java.lang.OutOfMemoryError: Java heap space happens when you try to push too much data through a single machine.

Based on the query provided, here are a few things you can try:

  1. Change your join conditions to explicit ones (remove them from the WHERE clause and use INNER / LEFT JOIN). For example:
FROM     customer_total_return ctr1 
         INNER JOIN store s
             ON ctr1.ctr_store_sk = s.s_store_sk
                AND s_state = 'NM'
         INNER JOIN customer c
             ON ctr1.ctr_customer_sk = c.c_customer_sk
  2. Check whether any of the following fields has skewed data:
    1. store_returns -> sr_returned_date_sk
    2. store_returns -> sr_store_sk
    3. store_returns -> sr_customer_sk
    4. customer -> c_customer_sk
    5. store -> s_store_sk
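A quick way to run the check in item 2 is to count rows per key and see whether one value dominates; a sketch using the table and column names from the query:

```sql
-- Top 10 store keys by row count in store_returns;
-- one key holding most of the rows indicates skew
SELECT sr_store_sk, COUNT(*) AS row_cnt
FROM store_returns
GROUP BY sr_store_sk
ORDER BY row_cnt DESC
LIMIT 10;
```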

It is possible that one of those keys has a high percentage of a single value, which can overload one node (when the data size is large).

Basically, you are trying to rule out the possible causes of node overload.

Let me know if this helps.

It could be a resource issue. Hive queries are executed internally as MapReduce jobs. You can check the Job History logs for the failed Hive MapReduce job. Sometimes executing a query from the shell is faster than from the Hive query editor.

Most of the time, OOM issues are related to query performance.

There are two parts to the query:

Part 1:

WITH customer_total_return AS 

( 
         SELECT   sr_customer_sk AS ctr_customer_sk , 
                  sr_store_sk    AS ctr_store_sk , 
                  Sum(sr_fee)    AS ctr_total_return 
         FROM     store_returns , 
                  date_dim 
         WHERE    sr_returned_date_sk = d_date_sk 
         AND      d_year =2000 
         GROUP BY sr_customer_sk , 
                  sr_store_sk)

Part 2:

SELECT   c_customer_id 
FROM     customer_total_return ctr1 , 
         store , 
         customer 
WHERE    ctr1.ctr_total_return > 
         ( 
                SELECT Avg(ctr_total_return)*1.2 
                FROM   customer_total_return ctr2 
                WHERE  ctr1.ctr_store_sk = ctr2.ctr_store_sk) 

AND      s_store_sk = ctr1.ctr_store_sk 
AND      s_state = 'NM' 
AND      ctr1.ctr_customer_sk = c_customer_sk 
ORDER BY c_customer_id limit 100;

Try enabling JMX for the Hive cluster and look at the memory usage of each query part, including the inner query of part 2.

You can also try a few Hive optimizations on the above query:

  1. Use SORT BY instead of the ORDER BY clause -> SORT BY sorts the data within each reducer only

  2. Partition the tables on the join keys so that only the relevant data is read, instead of scanning the whole table.

  3. Cache the small Hive table in the distributed cache and use a map-side join to reduce shuffling, for example:

select /*+MAPJOIN(b)*/ col1,col2,col3,col4 from table_A a join table_B b on a.account_number=b.account_number

  4. If there is a chance of skewed data in any of the tables, use the following parameters:

set hive.optimize.skewjoin=true; set hive.skewjoin.key=100000; (i.e., the threshold for the amount of data per key that should go to a single node)
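For item 1, a minimal sketch of the SORT BY variant (illustrative only; because SORT BY orders rows within each reducer, the result is not globally ordered, but it avoids funneling all data through a single reducer):

```sql
-- Per-reducer ordering instead of a single global sort
SELECT c_customer_id
FROM customer
SORT BY c_customer_id
LIMIT 100;
```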
