
Spark broadcast/serialize error

I've created a CLI driver for a Spark version of a Mahout job called "item similarity", with several tests that all work fine on local[4] Spark standalone. The code even reads and writes to clustered HDFS. But switching to clustered Spark hits a problem that seems tied to a broadcast and/or serialization.

The code uses HashBiMap, a Guava Java class. Two of these are created for every Mahout drm (a distributed matrix), for bi-directional row and column ID lookup. They are created once and then broadcast for access everywhere.
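For reference, a minimal sketch of the kind of bi-directional lookup a Guava HashBiMap gives us (the IDs below are made up for illustration, not taken from the job):

 import com.google.common.collect.HashBiMap

 // external ID -> Mahout ID, with an inverse view for the reverse lookup
 val dict = HashBiMap.create[String, Int]()
 dict.put("user-a", 0)
 dict.put("user-b", 1)
 dict.get("user-b")     // 1
 dict.inverse().get(0)  // "user-a"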

When I run this on clustered Spark I get the error below. At one point we were using HashMaps, and they seemed to work on the cluster, so I suspect something about the HashBiMap is causing the problem. I'm also suspicious that it may have to do with serialization in the broadcast. Here is a snippet of the code followed by the error.

 // create BiMaps for bi-directional lookup of ID by either Mahout ID or external ID
 // broadcast them for access in distributed processes, so they are not recalculated in every task.
 // rowIDDictionary is a HashBiMap[String, Int]
 val rowIDDictionary = asOrderedDictionary(rowIDs) // this creates the HashBiMap in a non-distributed manner
 val rowIDDictionary_bcast = mc.broadcast(rowIDDictionary)

 val columnIDDictionary = asOrderedDictionary(columnIDs) // this creates the HashBiMap in a non-distributed manner
 val columnIDDictionary_bcast = mc.broadcast(columnIDDictionary)

 val indexedInteractions =
   interactions.map { case (rowID, columnID) =>   //<<<<<<<<<<< this is the stage being submitted before the error
     val rowIndex = rowIDDictionary_bcast.value.get(rowID).get
     val columnIndex = columnIDDictionary_bcast.value.get(columnID).get

     rowIndex -> columnIndex
   }

The error seems to happen while executing interactions.map, when the _bcast vals are accessed. Any idea where to start looking for this?

14/06/26 11:23:36 INFO scheduler.DAGScheduler: Submitting Stage 9 (MappedRDD[17] at map at TextDelimitedReaderWriter.scala:83), which has no missing parents
14/06/26 11:23:36 INFO scheduler.DAGScheduler: Submitting 2 missing tasks from Stage 9 (MappedRDD[17] at map at TextDelimitedReaderWriter.scala:83)
14/06/26 11:23:36 INFO scheduler.TaskSchedulerImpl: Adding task set 9.0 with 2 tasks
14/06/26 11:23:36 INFO scheduler.TaskSetManager: Starting task 9.0:0 as TID 16 on executor 0: occam4 (PROCESS_LOCAL)
14/06/26 11:23:36 INFO scheduler.TaskSetManager: Serialized task 9.0:0 as 2418 bytes in 0 ms
14/06/26 11:23:36 INFO scheduler.TaskSetManager: Starting task 9.0:1 as TID 17 on executor 0: occam4 (PROCESS_LOCAL)
14/06/26 11:23:36 INFO scheduler.TaskSetManager: Serialized task 9.0:1 as 2440 bytes in 0 ms
14/06/26 11:23:36 WARN scheduler.TaskSetManager: Lost TID 16 (task 9.0:0)
14/06/26 11:23:36 WARN scheduler.TaskSetManager: Loss was due to java.lang.NullPointerException
java.lang.NullPointerException
    at com.google.common.collect.HashBiMap.seekByKey(HashBiMap.java:180)
    at com.google.common.collect.HashBiMap.put(HashBiMap.java:230)
    at com.google.common.collect.HashBiMap.put(HashBiMap.java:218)
    at com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:135)
    at com.esotericsoftware.kryo.serializers.MapSerializer.read(MapSerializer.java:17)
    at com.esotericsoftware.kryo.Kryo.readClassAndObject(Kryo.java:729)
    at org.apache.spark.serializer.KryoDeserializationStream.readObject(KryoSerializer.scala:102)
    at org.apache.spark.broadcast.HttpBroadcast$.read(HttpBroadcast.scala:165)
    at org.apache.spark.broadcast.HttpBroadcast.readObject(HttpBroadcast.scala:56)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:39)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:25)
    at java.lang.reflect.Method.invoke(Method.java:597)
    at java.io.ObjectStreamClass.invokeReadObject(ObjectStreamClass.java:969)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1871)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1775)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1327)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1969)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1775)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1327)
    at java.io.ObjectInputStream.defaultReadFields(ObjectInputStream.java:1969)
    at java.io.ObjectInputStream.readSerialData(ObjectInputStream.java:1893)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1775)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1327)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:349)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:40)
    at org.apache.spark.scheduler.ShuffleMapTask$.deserializeInfo(ShuffleMapTask.scala:69)
    at org.apache.spark.scheduler.ShuffleMapTask.readExternal(ShuffleMapTask.scala:138)
    at java.io.ObjectInputStream.readExternalData(ObjectInputStream.java:1814)
    at java.io.ObjectInputStream.readOrdinaryObject(ObjectInputStream.java:1773)
    at java.io.ObjectInputStream.readObject0(ObjectInputStream.java:1327)
    at java.io.ObjectInputStream.readObject(ObjectInputStream.java:349)
    at org.apache.spark.serializer.JavaDeserializationStream.readObject(JavaSerializer.scala:40)
    at org.apache.spark.serializer.JavaSerializerInstance.deserialize(JavaSerializer.scala:62)
    at org.apache.spark.executor.Executor$TaskRunner$$anonfun$run$1.apply$mcV$sp(Executor.scala:193)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:42)
    at org.apache.spark.deploy.SparkHadoopUtil$$anon$1.run(SparkHadoopUtil.scala:41)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:396)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1190)
    at org.apache.spark.deploy.SparkHadoopUtil.runAsUser(SparkHadoopUtil.scala:41)
    at org.apache.spark.executor.Executor$TaskRunner.run(Executor.scala:176)
    at java.util.concurrent.ThreadPoolExecutor$Worker.runTask(ThreadPoolExecutor.java:895)
    at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:918)
    at java.lang.Thread.run(Thread.java:662)

It looks like you are using Kryo serialization; are you using it in your local tests as well? If Kryo serialization of HashBiMap doesn't succeed, you may wish to explicitly register the class with Kryo so that it falls back to Java serialization.
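For example, here is a minimal sketch of such a registration (the registrator class name and SparkConf wiring are assumptions for illustration, not code from the question):

 import com.esotericsoftware.kryo.Kryo
 import com.esotericsoftware.kryo.serializers.JavaSerializer
 import com.google.common.collect.HashBiMap
 import org.apache.spark.SparkConf
 import org.apache.spark.serializer.KryoRegistrator

 // Hypothetical registrator: tell Kryo to use Java serialization for HashBiMap
 // instead of its default MapSerializer, which rebuilds the map via put() and
 // is where the NullPointerException in the trace above is thrown.
 class MyKryoRegistrator extends KryoRegistrator {
   override def registerClasses(kryo: Kryo) {
     kryo.register(classOf[HashBiMap[_, _]], new JavaSerializer())
   }
 }

 // Wire the registrator into the conf used to build the Spark context
 // (use the fully qualified class name if it lives in a package).
 val conf = new SparkConf()
   .set("spark.serializer", "org.apache.spark.serializer.KryoSerializer")
   .set("spark.kryo.registrator", "MyKryoRegistrator")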
