
Hive Query failing with Heap Issue

Below is the Hive query that is being run:

INSERT INTO TABLE temp.table_output

SELECT /*+ STREAMTABLE(tableB) */ c.column1 as client, a.column2 as testData, 
    CASE WHEN ca.updated_date IS NULL OR ca.updated_date = 'null' THEN null ELSE CONCAT(ca.updated_date, '+0000') END as update
    FROM temp.tableA as a 
    INNER JOIN default.tableB as ca ON a.column5=ca.column2
    INNER JOIN default.tableC as c ON ca.column3=c.column1 WHERE a.name='test';

TableB has 2.4 billion rows (140 GB); TableA and TableC have 200 million records each.

The cluster consists of 3 Cassandra data nodes and 3 Analytics nodes (Hive on top of Cassandra), with 130GB of memory on each node.

TableA, TableB, and TableC are Hive internal tables.

The Hive cluster heap size is 12GB.

Can someone tell me why I am hitting a heap issue when running this Hive query, so that the task cannot complete? It is the only job running on the Hive server.

The task fails with the following error:

Caused by: java.io.IOException: Read failed from file: cfs://172.31.x.x/tmp/hive-root/hive_2015-03-17_00-27-25_132_17376615815827139-1/-mr-10002/000049_0
    at com.datastax.bdp.hadoop.cfs.CassandraInputStream.read(CassandraInputStream.java:178)
    at java.io.DataInputStream.readFully(DataInputStream.java:195)
    at java.io.DataInputStream.readFully(DataInputStream.java:169)
    at org.apache.hadoop.io.SequenceFile$Reader.init(SequenceFile.java:1508)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1486)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1475)
    at org.apache.hadoop.io.SequenceFile$Reader.<init>(SequenceFile.java:1470)
    at org.apache.hadoop.mapred.SequenceFileRecordReader.<init>(SequenceFileRecordReader.java:43)
    at org.apache.hadoop.mapred.SequenceFileInputFormat.getRecordReader(SequenceFileInputFormat.java:59)
    at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.<init>(CombineHiveRecordReader.java:65)
    ... 16 more

Caused by: java.io.IOException: org.apache.thrift.TApplicationException: Internal error processing get_remote_cfs_sblock
    at com.datastax.bdp.hadoop.cfs.CassandraFileSystemThriftStore.retrieveSubBlock(CassandraFileSystemThriftStore.java:537)
    at com.datastax.bdp.hadoop.cfs.CassandraSubBlockInputStream.subBlockSeekTo(CassandraSubBlockInputStream.java:145)
    at com.datastax.bdp.hadoop.cfs.CassandraSubBlockInputStream.read(CassandraSubBlockInputStream.java:95)
    at com.datastax.bdp.hadoop.cfs.CassandraInputStream.read(CassandraInputStream.java:159)
    ... 25 more

Caused by: org.apache.thrift.TApplicationException: Internal error processing get_remote_cfs_sblock
    at org.apache.thrift.TApplicationException.read(TApplicationException.java:111)
    at org.apache.thrift.TServiceClient.receiveBase(TServiceClient.java:71)
    at org.apache.cassandra.thrift.Dse$Client.recv_get_remote_cfs_sblock(Dse.java:271)
    at org.apache.cassandra.thrift.Dse$Client.get_remote_cfs_sblock(Dse.java:254)
    at sun.reflect.GeneratedMethodAccessor11.invoke(Unknown Source)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at com.datastax.bdp.util.CassandraProxyClient.invokeDseClient(CassandraProxyClient.java:655)
    at com.datastax.bdp.util.CassandraProxyClient.invoke(CassandraProxyClient.java:631)
    at com.sun.proxy.$Proxy5.get_remote_cfs_sblock(Unknown Source)
    at com.datastax.bdp.hadoop.cfs.CassandraFileSystemThriftStore.retrieveSubBlock(CassandraFileSystemThriftStore.java:515)
    ... 28 more

Hive.log

2015-03-17 23:10:39,576 ERROR exec.Task (SessionState.java:printError(419)) - Examining task ID: task_201503171816_0036_r_000023 (and more) from job job_201503171816_0036
2015-03-17 23:10:39,579 ERROR exec.Task (SessionState.java:printError(419)) - Examining task ID: task_201503171816_0036_r_000052 (and more) from job job_201503171816_0036
2015-03-17 23:10:39,582 ERROR exec.Task (SessionState.java:printError(419)) - Examining task ID: task_201503171816_0036_m_000207 (and more) from job job_201503171816_0036
2015-03-17 23:10:39,585 ERROR exec.Task (SessionState.java:printError(419)) - Examining task ID: task_201503171816_0036_r_000087 (and more) from job job_201503171816_0036
2015-03-17 23:10:39,588 ERROR exec.Task (SessionState.java:printError(419)) - Examining task ID: task_201503171816_0036_m_000223 (and more) from job job_201503171816_0036
2015-03-17 23:10:39,591 ERROR exec.Task (SessionState.java:printError(419)) - Examining task ID: task_201503171816_0036_m_000045 (and more) from job job_201503171816_0036
2015-03-17 23:10:39,594 ERROR exec.Task (SessionState.java:printError(419)) - Examining task ID: task_201503171816_0036_m_000235 (and more) from job job_201503171816_0036
2015-03-17 23:10:39,597 ERROR exec.Task (SessionState.java:printError(419)) - Examining task ID: task_201503171816_0036_m_002140 (and more) from job job_201503171816_0036
2015-03-17 23:10:39,761 ERROR exec.Task (SessionState.java:printError(419)) - 
Task with the most failures(4): 
-----
Task ID:
  task_201503171816_0036_m_000036

URL:
  http://sjvtncasl064.mcafee.int:50030/taskdetails.jsp?jobid=job_201503171816_0036&tipid=task_201503171816_0036_m_000036
-----
Diagnostic Messages for this Task:
Error: Java heap space

2015-03-17 23:10:39,777 ERROR ql.Driver (SessionState.java:printError(419)) - FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

Most of the time, Hadoop errors on the tracker side are not very descriptive, something like "there was a problem retrieving data from a node". To find out what is actually happening, you need to pull the system.log plus the Hive and Hadoop task logs from every node, especially the ones that did not return data in time, and look at what was going on around the time of the error. You can also click on the running Hive job in OpsCenter, watch what is happening on each node, and see which error is breaking the job.
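When pulling task logs from each node, a quick way to zero in on the failing attempts is to scan them for the heap error. A minimal sketch (the log directory is an assumption, matching the usual Hadoop 1.x tasktracker layout; adjust it to your DSE install):

```shell
# scan_heap_errors DIR: list every task log under DIR that contains the
# "Java heap space" error, printing two lines of context around each hit.
scan_heap_errors() {
    dir="$1"
    grep -R -l "Java heap space" "$dir" 2>/dev/null | while read -r f; do
        echo "== $f =="
        grep -B2 -A2 "Java heap space" "$f"
    done
}

# assumption: default task-log location on each analytics node
scan_heap_errors /var/log/hadoop/userlogs
```

Running this on each node (or over logs copied to one place) tells you which attempts died of heap exhaustion versus which merely timed out waiting on a sick neighbor.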

Here are some links I have found very useful. Some of them apply to older versions of DSE, but they still provide a good starting point for tuning Hadoop operation and memory management.

http://www.datastax.com/dev/blog/tuning-dse-hadoop-map-reduce

http://www.datastax.com/documentation/datastax_enterprise/4.0/datastax_enterprise/ana/anaHivTune.html

https://support.datastax.com/entries/23459322-Tuning-memory-for-Hadoop-tasks

https://support.datastax.com/entries/23472546-Specification-the-number-of-concurrent-tasks-per-node
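As a rough illustration of what those tuning guides cover (property names are from the Hadoop 1.x generation that DSE shipped at the time; the values are placeholders, not recommendations), per-task heap and per-node task concurrency are set in mapred-site.xml, and trading concurrency for heap is often the first lever for "Java heap space" failures:

```xml
<!-- Sketch only: illustrative values, tune against your node's 130GB. -->
<property>
  <name>mapred.child.java.opts</name>
  <value>-Xmx2g</value>   <!-- heap for each map/reduce task JVM -->
</property>
<property>
  <name>mapred.tasktracker.map.tasks.maximum</name>
  <value>4</value>        <!-- fewer concurrent maps => more heap each -->
</property>
<property>
  <name>mapred.tasktracker.reduce.tasks.maximum</name>
  <value>2</value>
</property>
```

Note that the 12GB Hive server heap is separate from the task JVMs; the "Error: Java heap space" above comes from a task JVM, so `mapred.child.java.opts` is the setting that matters for it.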

You may also want to read this post. Sometimes the timeouts can be caused by long garbage collection pauses.
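If long GC pauses are suspected, one way to make them visible is to enable GC logging on the task JVMs. A minimal sketch, using HotSpot flags of that era appended to `mapred.child.java.opts` (the log path is an assumption; `@taskid@` is expanded by Hadoop 1.x into the task attempt ID):

```
mapred.child.java.opts=-Xmx2g -verbose:gc -XX:+PrintGCDetails -XX:+PrintGCTimeStamps -Xloggc:/var/log/hadoop/task-gc-@taskid@.log
```

Long stop-the-world pauses in these logs right before a task is declared failed are a strong hint that the timeout is GC-driven rather than a real network or Cassandra problem.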

HTH
