
RDD not serializable Cassandra/Spark connector java API

So I previously asked some questions about how to query Cassandra using Spark in a Java Maven project here: Querying Data in Cassandra via Spark in a Java Maven Project

Well, my question was answered and the solution worked, but I've run into an issue (or what may be an issue). I'm now trying to use the DataStax Java API. Here is my code:

package com.angel.testspark.test2;

import org.apache.commons.lang3.StringUtils;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;

import java.io.Serializable;

import static com.datastax.spark.connector.CassandraJavaUtil.*;


public class App 
{

    // firstly, we define a bean class
    public static class Person implements Serializable {
        private Integer id;
        private String fname;
        private String lname;
        private String role;

        // Remember to declare no-args constructor
        public Person() { }

        public Integer getId() { return id; }
        public void setId(Integer id) { this.id = id; }

        public String getfname() { return fname; }
        public void setfname(String fname) { this.fname = fname; }

        public String getlname() { return lname; }
        public void setlname(String lname) { this.lname = lname; }

        public String getrole() { return role; }
        public void setrole(String role) { this.role = role; }

        // other methods, constructors, etc.
    }

    private transient SparkConf conf;
    private App(SparkConf conf) {
        this.conf = conf;
    }


    private void run() {
        JavaSparkContext sc = new JavaSparkContext(conf);
        createSchema(sc);


        sc.stop();
    }

    private void createSchema(JavaSparkContext sc) {

        JavaRDD<String> rdd = javaFunctions(sc).cassandraTable("tester", "empbyrole", Person.class)
                .where("role=?", "IT Engineer").map(new Function<Person, String>() {
                    @Override
                    public String call(Person person) throws Exception {
                        return person.toString();
                    }
                });
        System.out.println("Data as Person beans: \n" + StringUtils.join("\n", rdd.toArray()));
    }



    public static void main( String[] args )
    {
        if (args.length != 2) {
            System.err.println("Syntax: com.datastax.spark.demo.JavaDemo <Spark Master URL> <Cassandra contact point>");
            System.exit(1);
        }

        SparkConf conf = new SparkConf();
        conf.setAppName("Java API demo");
        conf.setMaster(args[0]);
        conf.set("spark.cassandra.connection.host", args[1]);

        App app = new App(conf);
        app.run();
    }
}

Here is my error:

Exception in thread "main" org.apache.spark.SparkException: Job aborted: Task not serializable: java.io.NotSerializableException: com.angel.testspark.test2.App
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1020)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$org$apache$spark$scheduler$DAGScheduler$$abortStage$1.apply(DAGScheduler.scala:1018)
    at scala.collection.mutable.ResizableArray$class.foreach(ResizableArray.scala:59)
    at scala.collection.mutable.ArrayBuffer.foreach(ArrayBuffer.scala:47)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$abortStage(DAGScheduler.scala:1018)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitMissingTasks(DAGScheduler.scala:781)
    at org.apache.spark.scheduler.DAGScheduler.org$apache$spark$scheduler$DAGScheduler$$submitStage(DAGScheduler.scala:724)
    at org.apache.spark.scheduler.DAGScheduler.processEvent(DAGScheduler.scala:554)
    at org.apache.spark.scheduler.DAGScheduler$$anonfun$start$1$$anon$2$$anonfun$receive$1.applyOrElse(DAGScheduler.scala:190)
    at akka.actor.ActorCell.receiveMessage(ActorCell.scala:498)
    at akka.actor.ActorCell.invoke(ActorCell.scala:456)
    at akka.dispatch.Mailbox.processMailbox(Mailbox.scala:237)
    at akka.dispatch.Mailbox.run(Mailbox.scala:219)
    at akka.dispatch.ForkJoinExecutorConfigurator$AkkaForkJoinTask.exec(AbstractDispatcher.scala:386)
    at scala.concurrent.forkjoin.ForkJoinTask.doExec(ForkJoinTask.java:260)
    at scala.concurrent.forkjoin.ForkJoinPool$WorkQueue.runTask(ForkJoinPool.java:1339)
    at scala.concurrent.forkjoin.ForkJoinPool.runWorker(ForkJoinPool.java:1979)
    at scala.concurrent.forkjoin.ForkJoinWorkerThread.run(ForkJoinWorkerThread.java:107)

Now I KNOW exactly where my error is. It is System.out.println("Data as Person beans: \n" + StringUtils.join("\n", rdd.toArray())); because I need to convert the RDD to an array. However, the API documentation said I should be able to do this... this code is copied and pasted from the documentation. Why can't I serialize the RDD to an array?

I've already inserted dummy data into my Cassandra table using the insert statements from the post I linked above.

Also, I solved a previous error by changing all of my getters and setters to lowercase; when I used capitals in them, it produced an error. Why can't I use capitals in my getters and setters here?

Thanks, Angel

Changing public class App to public class App implements Serializable should fix the error. Because a Java inner class keeps a reference to its outer class, your Function object will have a reference to App. Since Spark needs to serialize your Function object, it requires that App also be serializable.
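
To make the fix concrete, here is a minimal sketch; everything not shown is assumed to stay exactly as in the question, and the only change is to the class declaration:

import java.io.Serializable;

// The only change: App now implements Serializable.
public class App implements Serializable 
{
    // The Person bean, the transient SparkConf field, run(), createSchema() and main()
    // remain exactly as posted above. The anonymous Function created inside
    // createSchema() captures a reference to the enclosing App instance; once App
    // is Serializable, Spark can serialize that Function and ship it to the workers.
}

Alternatively, if you would rather not make App serializable, you can avoid capturing the enclosing instance at all, for example by moving the mapping logic into its own static nested (and serializable) function class, so that only that small function object needs to be serialized.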
