
Getting out of memory error while reading parquet file in spark submit job

[Stage 0:>                                                          (0 + 0) / 8]SLF4J: Failed to load class "org.slf4j.impl.StaticLoggerBinder".
SLF4J: Defaulting to no-operation (NOP) logger implementation
SLF4J: See http://www.slf4j.org/codes.html#StaticLoggerBinder for further details.
[Stage 1:=====================================================>   (43 + 3) / 46]17/11/16 13:11:18 ERROR Executor: Exception in task 0.0 in stage 2.0 (TID 54)
java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:3236)
    at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
    at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
    at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
    at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1189)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:44)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:84)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
17/11/16 13:11:18 ERROR SparkUncaughtExceptionHandler: Uncaught exception in thread Thread[Executor task launch worker-4,5,main]
java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:3236)
    at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
    at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
    at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
    at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1189)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:44)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:84)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)
17/11/16 13:11:18 WARN TaskSetManager: Lost task 0.0 in stage 2.0 (TID 54, localhost): java.lang.OutOfMemoryError: Java heap space
    at java.util.Arrays.copyOf(Arrays.java:3236)
    at java.io.ByteArrayOutputStream.grow(ByteArrayOutputStream.java:118)
    at java.io.ByteArrayOutputStream.ensureCapacity(ByteArrayOutputStream.java:93)
    at java.io.ByteArrayOutputStream.write(ByteArrayOutputStream.java:153)
    at java.io.ObjectOutputStream$BlockDataOutputStream.drain(ObjectOutputStream.java:1877)
    at java.io.ObjectOutputStream$BlockDataOutputStream.setBlockDataMode(ObjectOutputStream.java:1786)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1189)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:44)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:84)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:240)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

This is my code:

    import org.apache.hadoop.hbase.HBaseConfiguration
    import org.apache.log4j.Level
    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc)
    //sqlContext.setConf("spark.sql.inMemoryColumnarStorage.compressed", "true")
    log.setLevel(Level.INFO)
    val config = HBaseConfiguration.create()
    // Read the parquet file and register it as a temp table
    val newDataDF = sqlContext.read.parquet(file)
    newDataDF.registerTempTable("newDataDF")
    //sqlContext.cacheTable("newDataDF")
    val result = sqlContext.sql("SELECT rec FROM newDataDF")
    // Pulls every row onto the driver; this is where the OOM occurs
    val rows = result.map(t => t(0)).collect()
    //val rows = result.map(t => t.getAs[String]("rec"))

It throws the out of memory error at the line val rows = result.map(t => t(0)).collect().

I have tried all the memory tuning options and increased executor/driver memory, but nothing seems to work. Any advice would be greatly appreciated.

Well, by calling collect on your DataFrame, you tell Spark to gather ALL data onto the driver. For larger datasets this will indeed drown the driver and cause OOMs.

Spark is a framework for distributed computing, intended to be used on large datasets that will not fit on a single machine. Only in very few cases do you ever want to call collect on a DataFrame: when you are debugging (on small datasets), or when you know your dataset has been reduced vastly in size by some filtering or aggregation transformations.
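For example, any of the following keeps most of the data off the driver (a rough sketch reusing the sqlContext and temp table from the question; the output path is just a placeholder):

    val result = sqlContext.sql("SELECT rec FROM newDataDF")

    // Debugging: bring only a handful of rows to the driver
    result.take(20).foreach(println)

    // Aggregations reduce the data before it reaches the driver
    val recCount = result.count()

    // For the full dataset, keep the work distributed and write the
    // result out instead of collecting it (path is a placeholder)
    result.write.parquet("/tmp/rec_output")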

You have to increase spark.driver.memory, whose default value is 1 GB. You can check the driver and executor memory using the --verbose flag. For more information, check this link and set the memory as per your requirements: https://spark.apache.org/docs/latest/configuration.html
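For example, the memory settings and --verbose flag can be passed to spark-submit roughly like this (the class name, jar, input path, and memory sizes below are placeholders; adjust them to your job):

    spark-submit \
      --class com.example.ReadParquetJob \
      --master local[*] \
      --driver-memory 4g \
      --executor-memory 4g \
      --verbose \
      read-parquet-job.jar /path/to/input.parquet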

