
java.lang.OutOfMemoryError: Java heap space error while running a Hive table scan

We are trying to read a big Hive table stored in RCFile format. The job processes the table's partitions one by one, and so far the map task seems to have written 30 spill files.

At one point it starts processing a file, and below is the log:

2018-06-15 00:54:28,977 INFO org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader: Processing file hdfs://xxx:8020/data/csv/7342/2018-06-14/17/1/Network_xxx.dat
2018-06-15 00:54:29,005 INFO org.apache.hadoop.hive.ql.exec.MapOperator: Processing alias ntwk for file hdfs://xxx:8020/data/csv/7342/2018-06-14/17/1
2018-06-15 00:55:04,029 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7 finished. closing... 
2018-06-15 00:55:04,129 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7 forwarded 6672342 rows
2018-06-15 00:55:04,266 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 3 finished. closing... 
2018-06-15 00:55:04,266 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 3 forwarded 0 rows
2018-06-15 00:55:04,513 INFO org.apache.hadoop.hive.ql.exec.ReduceSinkOperator: 2 finished. closing... 
2018-06-15 00:55:04,538 INFO org.apache.hadoop.hive.ql.exec.ReduceSinkOperator: 2 forwarded 0 rows
2018-06-15 00:55:04,563 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 3 Close done
2018-06-15 00:55:04,589 INFO org.apache.hadoop.hive.ql.exec.MapOperator: DESERIALIZE_ERRORS:0
2018-06-15 00:55:04,616 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 1 finished. closing... 
2018-06-15 00:55:04,641 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 1 forwarded 6672342 rows
2018-06-15 00:55:04,666 INFO org.apache.hadoop.hive.ql.exec.ReduceSinkOperator: 0 finished. closing... 
2018-06-15 00:55:04,691 INFO org.apache.hadoop.hive.ql.exec.ReduceSinkOperator: 0 forwarded 0 rows
2018-06-15 00:55:04,716 INFO org.apache.hadoop.hive.ql.exec.TableScanOperator: 1 Close done
2018-06-15 00:55:04,741 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7 Close done
2018-06-15 00:55:04,792 INFO ExecMapper: ExecMapper: processed 6672342 rows: used memory = 412446808
2018-06-15 00:55:10,316 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
2018-06-15 00:55:10,852 FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: Java heap space
    at org.apache.hadoop.io.compress.DecompressorStream.<init>(DecompressorStream.java:50)
    at org.apache.hadoop.io.compress.BlockDecompressorStream.<init>(BlockDecompressorStream.java:50)
    at org.apache.hadoop.io.compress.SnappyCodec.createInputStream(SnappyCodec.java:173)
    at org.apache.hadoop.hive.ql.io.RCFile$Reader.nextKeyBuffer(RCFile.java:1447)
    at org.apache.hadoop.hive.ql.io.RCFile$Reader.next(RCFile.java:1602)
    at org.apache.hadoop.hive.ql.io.RCFileRecordReader.next(RCFileRecordReader.java:98)
    at org.apache.hadoop.hive.ql.io.RCFileRecordReader.next(RCFileRecordReader.java:85)
    at org.apache.hadoop.hive.ql.io.RCFileRecordReader.next(RCFileRecordReader.java:39)
    at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.doNext(HiveContextAwareRecordReader.java:274)
    at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:101)
    at org.apache.hadoop.hive.ql.io.CombineHiveRecordReader.doNext(CombineHiveRecordReader.java:41)
    at org.apache.hadoop.hive.ql.io.HiveContextAwareRecordReader.next(HiveContextAwareRecordReader.java:108)
    at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.doNextWithExceptionHandler(HadoopShimsSecure.java:329)
    at org.apache.hadoop.hive.shims.HadoopShimsSecure$CombineFileRecordReader.next(HadoopShimsSecure.java:247)
    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.moveToNext(MapTask.java:215)
    at org.apache.hadoop.mapred.MapTask$TrackedRecordReader.next(MapTask.java:200)
    at org.apache.hadoop.mapred.MapRunner.run(MapRunner.java:48)
    at org.apache.hadoop.mapred.MapTask.runOldMapper(MapTask.java:417)
    at org.apache.hadoop.mapred.MapTask.run(MapTask.java:332)
    at org.apache.hadoop.mapred.Child$4.run(Child.java:268)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1408)
    at org.apache.hadoop.mapred.Child.main(Child.java:262)

How was it able to process the other files, and why did it hit an OutOfMemoryError at this point? What could have happened internally?

What is the difference between the heap space and the map buffer in a map task?

Below are some of the relevant configurations:

mapred.map.child.java.opts : -Xmx512M
mapred.job.reduce.memory.mb : -1
mapred.job.map.memory.mb : -1
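
On the heap-vs-buffer question: in classic (MRv1) MapReduce, the map-side sort buffer (io.sort.mb) is allocated inside the same JVM heap that -Xmx caps, which is also why the spill files mentioned above appear whenever that buffer fills. A minimal sketch of sizing the two together from a Hive session, with illustrative values that are assumptions rather than recommendations:

SET mapred.map.child.java.opts=-Xmx1024M;  -- total JVM heap available to each map task
SET io.sort.mb=256;                        -- sort buffer carved out of that same heap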

The Hive map task has also printed its statistics: the number of rows it processed and the memory it used. The memory used (412446808 bytes, roughly 393 MB) seems to be below the configured 512 MB heap.

Why did it register an OutOfMemoryError just six seconds after printing the stats? I'm unable to understand what went wrong.

2018-06-15 00:55:04,741 INFO org.apache.hadoop.hive.ql.exec.MapOperator: 7 Close done
**2018-06-15 00:55:04,792 INFO ExecMapper: ExecMapper: processed 6672342 rows: used memory = 412446808**
2018-06-15 00:55:10,316 INFO org.apache.hadoop.mapred.TaskLogsTruncater: Initializing logs' truncater with mapRetainSize=-1 and reduceRetainSize=-1
**2018-06-15 00:55:10,852 FATAL org.apache.hadoop.mapred.Child: Error running child : java.lang.OutOfMemoryError: Java heap space**
    at org.apache.hadoop.io.compress.DecompressorStream.<init>(DecompressorStream.java:50)
    at org.apache.hadoop.io.compress.BlockDecompressorStream.<init>(BlockDecompressorStream.java:50)
    at org.apache.hadoop.io.compress.SnappyCodec.createInputStream(SnappyCodec.java:173)
    at org.apache.hadoop.hive.ql.io.RCFile$Reader.nextKeyBuffer(RCFile.java:1447)

Try increasing the heap size:

mapred.map.child.java.opts : -Xmx8192M
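
If changing the cluster-wide job configuration is not possible, the same property can be set per session from the Hive CLI. A sketch: the 8192M figure is taken from the suggestion above, and raising mapred.job.map.memory.mb only matters if memory-based scheduling is enabled (the question's value of -1 suggests it is not); the 9216 figure is an assumed container limit sized above the heap.

SET mapred.map.child.java.opts=-Xmx8192M;  -- JVM heap for each map task
SET mapred.job.map.memory.mb=9216;         -- scheduler/container limit, kept above the heap (assumed value)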

You can also try increasing the memory limit for the reducer at run time. For details, please refer to this blog: https://dataanalyticstrend.blogspot.com/2020/04/what-is-hive-and-how-to-solve-hive-heap.html
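
For example, from a Hive session (a sketch using the MRv1 property names that appear in the question; the values are illustrative assumptions):

SET mapred.job.reduce.memory.mb=4096;         -- memory limit for each reduce task
SET mapred.reduce.child.java.opts=-Xmx3276M;  -- reducer JVM heap, kept below that limit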
