
Spark: StackOverflow Error in iterative algorithm implementation


I am trying to implement an iterative algorithm in Spark using Scala, but Spark throws a StackOverflowError. My code is:

import org.apache.spark.rdd.RDD
import org.apache.spark.{SparkConf, SparkContext}

import scala.annotation.tailrec

// obj, SphereFunc and Updater are defined elsewhere in the project.
object demo {

  def main(args: Array[String]): Unit = {

    val N = 60            // args(0).toInt  // Population length
    val d = 60            // args(1).toInt  // Dimensions; number of unknown decision variables
    val Iterations = 400  // args(2).toInt  // Maximum number of iterations
    val nP = 3            // args(3).toInt  // Number of partitions

    val MinVal = -100     // args(6).toInt  // Lower bound
    val MaxVal = 100      // args(7).toInt  // Upper bound
    val Fmin = 0          // Minimum frequency
    val Fmax = 1          // Maximum frequency
    val Bandwidth = 0.001
    val InitialPulseRate = 0.1
    val alpha = 0.95
    val gyma = 0.95

    var GlobalBest_Fitness = Double.PositiveInfinity
    val batList = List.fill(N)(new obj(d, MinVal, MaxVal))

    batList.foreach { x =>
      x.fitness = SphereFunc(x.position) // Update fitness
    }
    GlobalBest_Fitness = batList.minBy(_.fitness).fitness

    val conf = new SparkConf().setMaster("local").setAppName("spark Demo")
    val sc = new SparkContext(conf)
    val rdd = sc.parallelize(batList, nP)
    var partitioned = rdd
    partitioned.persist()
    var itrU = 0

    @tailrec
    def Fun(itr: Int): RDD[obj] = itr match {
      case 0 => partitioned                           // Base case of the recursion
      case _ =>
        itrU = Iterations - itr + 1
        partitioned = Updater(partitioned, itrU, Bandwidth)
        Fun(itr - 1)                                  // Recursive call
    }

    Fun(Iterations)
  }
}

A short excerpt of the error message is:

Exception in thread "main" java.lang.StackOverflowError
    at java.io.ObjectStreamClass$WeakClassKey.<init>(ObjectStreamClass.java:2505)
    at java.io.ObjectStreamClass.lookup(ObjectStreamClass.java:348)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1134)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
    at scala.collection.immutable.List$SerializationProxy.writeObject(List.scala:468)

How can I resolve this error? I need to run thousands of iterations, and the number of dimensions (d) and the population size (N) will also take much larger values. I used a tail-recursive function so that it would run in constant stack space, but that is not how it behaves.

The StackOverflowError is related to the spark.driver and spark.executor memory allocation, whose default value in the Spark configuration is 1g. The SparkConf should be updated as follows. If the allocated memory is still insufficient, try increasing the values further.

val conf = new SparkConf()
  .setMaster("local")
  .setAppName("spark Demo")
  .set("spark.executor.memory", "4g")
  .set("spark.driver.memory", "4g")
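
If you are on Spark 2.x or later, the same settings can also be supplied through the SparkSession builder. This is only a sketch under that assumption; the app name and the 4g values are taken from the snippet above and should be sized to your workload and machine.

import org.apache.spark.sql.SparkSession

// Minimal sketch (Spark 2.x+): the same memory settings supplied via SparkSession.
// The 4g values are illustrative, not a recommendation.
val spark = SparkSession.builder()
  .master("local")
  .appName("spark Demo")
  .config("spark.executor.memory", "4g")
  .config("spark.driver.memory", "4g")
  .getOrCreate()

val sc = spark.sparkContext // underlying SparkContext for the RDD-based loop

When the application is launched with spark-submit, these values can instead be passed as configuration options on the command line, which avoids hard-coding them in the source.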

