
java.lang.StackOverflowError thrown in spark-submit but not when running in the IDE

I have developed a Spark 2.2 application for collaborative filtering. It runs and debugs fine in IntelliJ, and I can open the Spark web UI to inspect the job. But when I tried to deploy it to EMR and test spark-submit locally, the program failed.

Part of the spark-submit command:

spark-submit -v --master local[*] --deploy-mode client --executor-memory 4G --num-executors 10 --conf spark.executor.extraJavaOptions="-Xss200M " --conf spark.executor.memory="500M" 
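
(Note that with --master local[*] the executors run inside the driver JVM, so spark.executor.extraJavaOptions never reaches a separate executor process; a larger thread stack would have to be set on the driver instead. A minimal sketch of such a variant, where the class and jar names (com.example.CollaborativeFilter, app.jar) are assumptions, not from the original command:

    spark-submit -v \
        --master local[*] \
        --deploy-mode client \
        --driver-memory 4G \
        --driver-java-options "-Xss512M" \
        --class com.example.CollaborativeFilter \
        app.jar

)

The part of the application that fails: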
import org.apache.commons.math3.stat.descriptive.DescriptiveStatistics
import org.apache.spark.SparkContext
import org.apache.spark.mllib.recommendation.{MatrixFactorizationModel, Rating}

import scala.collection.mutable

def finalStep(sc: SparkContext): Unit = {
        val sameModel = MatrixFactorizationModel.load(sc, "CollaborativeFilter")
        val globalInterestStats = mutable.Map[
            Int, (DescriptiveStatistics, mutable.MutableList[Rating])
        ]()

        val taxonsForUsers = sameModel.recommendProductsForUsers(200)

        taxonsForUsers
            .collect()
            .flatMap(userToInterestArr => {
                userToInterestArr._2.map(rating => {
                    if (globalInterestStats.get(rating.product).isEmpty) {
                        globalInterestStats(rating.product) = (
                            new DescriptiveStatistics(),
                            mutable.MutableList[Rating]()
                        )
                    }

                    globalInterestStats(rating.product)._1.addValue(rating.rating)

                    (rating, userToInterestArr._2)
                })
            })
            .foreach(ratingToUserInterestArr => {
                val rating = ratingToUserInterestArr._1

                if (globalInterestStats.get(rating.product).isDefined) {
                    val interestStats = globalInterestStats(rating.product)
                    val userInterests = ratingToUserInterestArr._2

                    if (rating.rating >= interestStats._1.getPercentile(75)) {
                        userInterests.foreach(each => interestStats._2 += each)
                    }
                }
            })

        println(globalInterestStats.toSeq.length) // ~300

        val globalInterestRDD = sc.parallelize(globalInterestStats.toSeq, 100) // the number of partitions does not matter
        val nGlobalInterests = globalInterestStats.map(each => each._2._2.length).sum

// This part was failing in spark-submit, but I managed to rework it so the data is simplified before the RDD is created
        val taxonIDFMap = sc.parallelize(
                globalInterestStats
                    .toSeq
                    .flatMap(each => {
                        each._2._2
                            .foldLeft(mutable.Map[Int, Double]())(op = (accu, value) => {
                                if (accu.get(value.product).isEmpty) {
                                    accu(value.product) = 1
                                } else {
                                    accu(value.product) += 1
                                }

                                accu
                            })
                            .toList
                }), 100)
            .reduceByKey((accu, value) => accu + value)
            .map(each => {
                val a: Double = Math.log10(nGlobalInterests / (1 + each._2)) / Math.log10(2)

                (
                    each._1,
                    a
                )
            })
            .collect()
            .toMap

// But I still have a far more complicated task that must operate on globalInterestRDD, and its size cannot be reduced to something Spark handles
        val result = globalInterestRDD
            .count()

        sc.stop()

        println(result)
    }

Exception in thread "dispatcher-event-loop-1" java.lang.StackOverflowError
    at java.io.ObjectOutputStream$ReplaceTable.lookup(ObjectOutputStream.java:2399)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1113)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    ...

I guess it is highly related to this post: http://asyncified.io/2016/12/10/mutablelist-and-the-short-path-to-a-stackoverflowerror/

But I am still trying to understand the problem and fix my code.

The problem was that

val globalInterestStats = mutable.Map[
    Int, (DescriptiveStatistics, mutable.MutableList[Rating])
]()

should be

val globalInterestStats = mutable.Map[
    Int, (DescriptiveStatistics, mutable.ArrayBuffer[Rating])
]()

although it still does not make sense to me why the Spark application runs in the IDE but fails under spark-submit.
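
The linked post explains the mechanism: mutable.MutableList is a cons-style linked list, and default Java serialization recurses once per node, so a list with enough elements exhausts the thread stack; mutable.ArrayBuffer is array-backed and serializes without deep recursion. A minimal sketch of the difference outside Spark (assuming Scala 2.11, which Spark 2.2 builds against; the element count is arbitrary, just large enough to blow a default-sized stack):

    import java.io.{ByteArrayOutputStream, ObjectOutputStream}
    import scala.collection.mutable

    object SerializationRepro {
        def main(args: Array[String]): Unit = {
            val linked = mutable.MutableList[Int]()
            val buffered = mutable.ArrayBuffer[Int]()
            (1 to 1000000).foreach { i => linked += i; buffered += i }

            // Array-backed buffer: the serializer loops over a flat array, so this completes
            new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(buffered)

            // Linked nodes: writeObject recurses through each node's `next` reference,
            // producing the same java.lang.StackOverflowError as in the trace above
            new ObjectOutputStream(new ByteArrayOutputStream()).writeObject(linked)
        }
    }

As for the IDE/spark-submit discrepancy, one hedged guess: the overflow happens in a driver-side thread ("dispatcher-event-loop-1"), and the two environments need not launch that JVM with the same thread-stack size, so the same recursion depth can fit in one run and overflow in the other.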

