How to use the map function in Spark (Java) with a method from another class

    import java.util.Arrays;
    import java.util.List;

    import org.apache.spark.api.java.JavaRDD;
    import org.apache.spark.api.java.JavaSparkContext;
    import org.apache.spark.api.java.function.Function;

    public class RDDExample {
        public static void main(String[] args) {
            final JavaSparkContext sc = SparkSingleton.getContext();
            final Lemmatizer lemmatizer = new Lemmatizer();
            List<String> dirtyTwits = Arrays.asList(
                    "Shipment of gold arrived in a truck",
                    "Delivery of silver arrived in a silver truck",
                    "Shipment of gold damaged in a fire"
                    // etc., make up more examples yourself :)
            );
            JavaRDD<String> twitsRDD = sc.parallelize(dirtyTwits);

            JavaRDD<List<String>> lemmatizedTwits = twitsRDD.map(new Function<String, List<String>>() {
                @Override
                public List<String> call(String s) throws Exception {
                    return lemmatizer.Execute(s); // returns List<String>
                }
            });
            System.out.println(lemmatizedTwits.collect());
        }
    }

I wrote this code, but at runtime I get the exception Exception in thread "main" org.apache.spark.SparkException: Task not serializable. I searched for it on Google, but could not find a solution for Java; everything I found is either Scala code or trivial operations like return s + "qwer". Where can I read about how to use methods from other classes inside .map? Or can someone tell me how this works? Sorry for my English. Full traceback:

Exception in thread "main" org.apache.spark.SparkException: Task not serializable
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:166)
    at org.apache.spark.util.ClosureCleaner$.clean(ClosureCleaner.scala:158)
    at org.apache.spark.SparkContext.clean(SparkContext.scala:1435)
    at org.apache.spark.rdd.RDD.map(RDD.scala:271)
    at org.apache.spark.api.java.JavaRDDLike$class.map(JavaRDDLike.scala:78)
    at org.apache.spark.api.java.JavaRDD.map(JavaRDD.scala:32)
    at RDDExample.main(RDDExample.java:26)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:498)
    at com.intellij.rt.execution.application.AppMain.main(AppMain.java:147)
Caused by: java.io.NotSerializableException: preprocessor.coreNlp.Lemmatizer
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1184)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.defaultWriteFields(ObjectOutputStream.java:1548)
    at java.io.ObjectOutputStream.writeSerialData(ObjectOutputStream.java:1509)
    at java.io.ObjectOutputStream.writeOrdinaryObject(ObjectOutputStream.java:1432)
    at java.io.ObjectOutputStream.writeObject0(ObjectOutputStream.java:1178)
    at java.io.ObjectOutputStream.writeObject(ObjectOutputStream.java:348)
    at org.apache.spark.serializer.JavaSerializationStream.writeObject(JavaSerializer.scala:42)
    at org.apache.spark.serializer.JavaSerializerInstance.serialize(JavaSerializer.scala:73)
    at org.apache.spark.util.ClosureCleaner$.ensureSerializable(ClosureCleaner.scala:164)
    ... 11 more

Full log:

Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
17/01/15 00:45:49 INFO SecurityManager: Changing view acls to: ntsfk
17/01/15 00:45:49 INFO SecurityManager: Changing modify acls to: ntsfk
17/01/15 00:45:49 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(ntsfk); users with modify permissions: Set(ntsfk)
17/01/15 00:45:50 INFO Slf4jLogger: Slf4jLogger started
17/01/15 00:45:50 INFO Remoting: Starting remoting
17/01/15 00:45:51 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@localhost:64122]
17/01/15 00:45:51 INFO Utils: Successfully started service 'sparkDriver' on port 64122.
17/01/15 00:45:51 INFO SparkEnv: Registering MapOutputTracker
17/01/15 00:45:51 INFO SparkEnv: Registering BlockManagerMaster
17/01/15 00:45:51 INFO DiskBlockManager: Created local directory at F:\Local\Temp\spark-local-20170115004551-eaac
17/01/15 00:45:51 INFO MemoryStore: MemoryStore started with capacity 491.7 MB
17/01/15 00:45:52 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
17/01/15 00:45:53 INFO HttpFileServer: HTTP File server directory is F:\Local\Temp\spark-e041cd0f-83b9-46fa-b5d0-4fce800a2778
17/01/15 00:45:53 INFO HttpServer: Starting HTTP Server
17/01/15 00:45:53 INFO Utils: Successfully started service 'HTTP file server' on port 64123.
17/01/15 00:45:53 INFO Utils: Successfully started service 'SparkUI' on port 4040.
17/01/15 00:45:53 INFO SparkUI: Started SparkUI at http://DESKTOP-B29B6NA:4040
17/01/15 00:45:54 INFO AkkaUtils: Connecting to HeartbeatReceiver: akka.tcp://sparkDriver@localhost:64122/user/HeartbeatReceiver
17/01/15 00:45:55 INFO NettyBlockTransferService: Server created on 64134
17/01/15 00:45:55 INFO BlockManagerMaster: Trying to register BlockManager
17/01/15 00:45:55 INFO BlockManagerMasterActor: Registering block manager localhost:64134 with 491.7 MB RAM, BlockManagerId(<driver>, localhost, 64134)
17/01/15 00:45:55 INFO BlockManagerMaster: Registered BlockManager
17/01/15 00:45:55 INFO StanfordCoreNLP: Adding annotator tokenize
17/01/15 00:45:55 INFO TokenizerAnnotator: TokenizerAnnotator: No tokenizer type provided. Defaulting to PTBTokenizer.
17/01/15 00:45:55 INFO StanfordCoreNLP: Adding annotator ssplit
17/01/15 00:45:55 INFO StanfordCoreNLP: Adding annotator pos
Reading POS tagger model from edu/stanford/nlp/models/pos-tagger/english-left3words/english-left3words-distsim.tagger ... done [3,5 sec].
17/01/15 00:45:59 INFO StanfordCoreNLP: Adding annotator lemma

After this I get the exception.

Environment: Java 1.8, Spark 2.10

Typically the first method of choice would be to make Lemmatizer Serializable, but you have to remember that serialization is not the only possible problem here. Spark executors depend heavily on multi-threading, and any object used in a closure should be thread-safe.
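
For example, here is a minimal sketch of that first approach. It assumes Lemmatizer wraps a StanfordCoreNLP pipeline (the class itself is not shown in the question, so the field and method bodies below are guesses): the non-serializable pipeline is kept in a transient field and built lazily, so only the lightweight wrapper gets serialized with the task.

    import java.io.Serializable;
    import java.util.ArrayList;
    import java.util.List;
    import java.util.Properties;

    import edu.stanford.nlp.ling.CoreAnnotations;
    import edu.stanford.nlp.ling.CoreLabel;
    import edu.stanford.nlp.pipeline.Annotation;
    import edu.stanford.nlp.pipeline.StanfordCoreNLP;
    import edu.stanford.nlp.util.CoreMap;

    public class Lemmatizer implements Serializable {
        // The CoreNLP pipeline itself is not serializable, so keep it transient
        // and rebuild it lazily on each executor after deserialization.
        private transient StanfordCoreNLP pipeline;

        private StanfordCoreNLP pipeline() {
            if (pipeline == null) {
                Properties props = new Properties();
                props.setProperty("annotators", "tokenize, ssplit, pos, lemma");
                pipeline = new StanfordCoreNLP(props);
            }
            return pipeline;
        }

        // Hypothetical body: annotate the text and collect the lemma of every token.
        public List<String> Execute(String text) {
            Annotation document = new Annotation(text);
            pipeline().annotate(document);
            List<String> lemmas = new ArrayList<>();
            for (CoreMap sentence : document.get(CoreAnnotations.SentencesAnnotation.class)) {
                for (CoreLabel token : sentence.get(CoreAnnotations.TokensAnnotation.class)) {
                    lemmas.add(token.get(CoreAnnotations.LemmaAnnotation.class));
                }
            }
            return lemmas;
        }
    }

With this, only the empty wrapper travels with the task; the heavyweight pipeline is rebuilt where it is actually used.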

If satisfying both conditions (serializability and thread-safety) is problematic, an alternative solution is to create a separate instance for each executor thread, for example with mapPartitions. A naive solution (in general it is better to avoid collecting a whole partition) can be sketched as follows:

    twitsRDD.mapPartitions(iter -> {
        Lemmatizer lemmatizer = new Lemmatizer();
        List<List<String>> lemmas = new LinkedList<>();

        while (iter.hasNext()) {
            lemmas.add(lemmatizer.Execute(iter.next()));
        }

        return lemmas.iterator();
    });
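
One caveat about the snippet above: the return type expected from the mapPartitions lambda depends on the Spark version. In Spark 1.x the Java FlatMapFunction returns an Iterable, so the block would end with return lemmas; rather than return lemmas.iterator(); from Spark 2.0 on it returns an Iterator, as written above. If the compiler complains about the return type, this is the likely reason.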

This should resolve the serialization issue and address some, but not all, of the thread-safety concerns. Since recent versions of CoreNLP claim to be thread-safe, it should be good enough in your case.
