
spark error: java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE

I am trying to count the number of negative samples as follows:

val numNegatives = dataSet.filter(col("label") < 0.5).count

But I get a "Size exceeds Integer.MAX_VALUE" error:

java.lang.IllegalArgumentException: Size exceeds Integer.MAX_VALUE
    at sun.nio.ch.FileChannelImpl.map(FileChannelImpl.java:869)
    at org.apache.spark.storage.DiskStore$$anonfun$getBytes$2.apply(DiskStore.scala:127)
    at org.apache.spark.storage.DiskStore$$anonfun$getBytes$2.apply(DiskStore.scala:115)
    at org.apache.spark.util.Utils$.tryWithSafeFinally(Utils.scala:1239)
    at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:129)
    at org.apache.spark.storage.DiskStore.getBytes(DiskStore.scala:136)
    at org.apache.spark.storage.BlockManager.doGetLocal(BlockManager.scala:512)
    at org.apache.spark.storage.BlockManager.getLocal(BlockManager.scala:427)
    at org.apache.spark.storage.BlockManager.get(BlockManager.scala:636)
    at org.apache.spark.CacheManager.getOrCompute(CacheManager.scala:44)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:268)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.rdd.MapPartitionsRDD.compute(MapPartitionsRDD.scala:38)
    at org.apache.spark.rdd.RDD.computeOrReadCheckpoint(RDD.scala:306)
    at org.apache.spark.rdd.RDD.iterator(RDD.scala:270)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:73)
    at org.apache.spark.scheduler.ShuffleMapTask.runTask(ShuffleMapTask.scala:41)
    at org.apache.spark.scheduler.Task.run(Task.scala:89)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:227)
    at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
    at java.lang.Thread.run(Thread.java:745)

Some answers suggested increasing the number of partitions, so I updated the code above as follows:

val data = dataSet.repartition(5000).cache()
val numNegatives = data.filter(col("label") < 0.5).count

But it reports the same error! This has puzzled me for several days. Can anyone help me? Thanks.

Try repartitioning before the filter:

val numNegatives = dataSet.repartition(1000).filter(col("label") < 0.5).count

The filter is executed on the original DataSet's partitions, and only afterwards is the result repartitioned. You need the smaller partitions to be in place for the filter itself.
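
You can confirm this by checking the partition counts at each step. Below is a minimal sketch, assuming dataSet is the DataFrame from the question and that a SparkSession is already running; the variable name repartitioned is only for illustration:

import org.apache.spark.sql.functions.col

// partitions a plain filter would run on (the original layout)
println(dataSet.rdd.getNumPartitions)

// repartition first, so the filter runs on the smaller partitions
val repartitioned = dataSet.repartition(1000)
println(repartitioned.rdd.getNumPartitions)   // should print 1000

val numNegatives = repartitioned.filter(col("label") < 0.5).count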

The problem here is that, once materialized, a ShuffleRDD block is larger than 2GB, and Spark has this limit. You need to change the spark.sql.shuffle.partitions parameter, which defaults to 200.
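
For reference, the same parameter can also be set outside of a SQL statement. A sketch, assuming a Spark 2.x SparkSession named spark (the value 10000 simply mirrors the example below):

// set on an existing SparkSession
spark.conf.set("spark.sql.shuffle.partitions", "10000")

// or pass it at submit time (shell command, not Scala):
//   spark-submit --conf spark.sql.shuffle.partitions=10000 ...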

In addition, you may need to increase the number of partitions your dataset has. Repartition it and save it first, then read the new dataset back and perform the operation.

spark.sql("SET spark.sql.shuffle.partitions = 10000")
dataset.repartition(10000).write.parquet("/path/to/hdfs")
val newDataset = spark.read.parquet("/path/to/hdfs")  
newDataset.filter(...).count

Or, if you want to use a Hive table:

spark.sql("SET spark.sql.shuffle.partitions = 10000")
dataset.repartition(10000).write.saveAsTable("newTableName")
val newDataset = spark.table("newTableName")  
newDataset.filter(...).count

