
Error java.lang.NoClassDefFoundError in Spark application

I am trying to submit a JAR file to Spark using /usr/local/spark# ./bin/spark-submit --class "DataframeExample" --master local[2] ~/new/hbfinance-module-1.0-SNAPSHOT.jar /. I'm using Apache Spark 1.5.0. The console shows the application loading, but it then fails with this:

/usr/local/spark# ./bin/spark-submit --class "DataframeExample" --master local[2] ~/new/hbfinance-module-1.0-SNAPSHOT.jar /
Using Spark's default log4j profile: org/apache/spark/log4j-defaults.properties
16/10/12 12:42:08 INFO SparkContext: Running Spark version 1.5.0
16/10/12 12:42:09 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
16/10/12 12:42:09 INFO SecurityManager: Changing view acls to: root
16/10/12 12:42:09 INFO SecurityManager: Changing modify acls to: root
16/10/12 12:42:09 INFO SecurityManager: SecurityManager: authentication disabled; ui acls disabled; users with view permissions: Set(root); users with modify permissions: Set(root)
16/10/12 12:42:10 INFO Slf4jLogger: Slf4jLogger started
16/10/12 12:42:10 INFO Remoting: Starting remoting
16/10/12 12:42:10 INFO Remoting: Remoting started; listening on addresses :[akka.tcp://sparkDriver@178.62.18.22:44153]
16/10/12 12:42:10 INFO Utils: Successfully started service 'sparkDriver' on port 44153.
16/10/12 12:42:10 INFO SparkEnv: Registering MapOutputTracker
16/10/12 12:42:10 INFO SparkEnv: Registering BlockManagerMaster
16/10/12 12:42:10 INFO DiskBlockManager: Created local directory at /tmp/blockmgr-2416dd91-f441-4618-8141-e97257cdd17b
16/10/12 12:42:10 INFO MemoryStore: MemoryStore started with capacity 530.0 MB
16/10/12 12:42:10 INFO HttpFileServer: HTTP File server directory is /tmp/spark-f80de7bc-6613-4125-b047-eae7574df54e/httpd-a9849575-8777-4e8d-a6df-fdb2a08a0b87
16/10/12 12:42:10 INFO HttpServer: Starting HTTP Server
16/10/12 12:42:11 INFO Utils: Successfully started service 'HTTP file server' on port 44964.
16/10/12 12:42:11 INFO SparkEnv: Registering OutputCommitCoordinator
16/10/12 12:42:11 INFO Utils: Successfully started service 'SparkUI' on port 4040.
16/10/12 12:42:11 INFO SparkUI: Started SparkUI at http://178.62.18.22:4040
16/10/12 12:42:11 INFO SparkContext: Added JAR file:/root/new/hbfinance-module-1.0-SNAPSHOT.jar at http://178.62.18.22:44964/jars/hbfinance-module-1.0-SNAPSHOT.jar with timestamp 1476290531299
16/10/12 12:42:11 WARN MetricsSystem: Using default name DAGScheduler for source because spark.app.id is not set.
16/10/12 12:42:11 INFO Executor: Starting executor ID driver on host localhost
16/10/12 12:42:11 INFO Utils: Successfully started service 'org.apache.spark.network.netty.NettyBlockTransferService' on port 46409.
16/10/12 12:42:11 INFO NettyBlockTransferService: Server created on 46409
16/10/12 12:42:11 INFO BlockManagerMaster: Trying to register BlockManager
16/10/12 12:42:11 INFO BlockManagerMasterEndpoint: Registering block manager localhost:46409 with 530.0 MB RAM, BlockManagerId(driver, localhost, 46409)
16/10/12 12:42:11 INFO BlockManagerMaster: Registered BlockManager
Exception in thread "main" java.lang.NoClassDefFoundError: com/mongodb/hadoop/MongoInputFormat
    at com.hbfinance.DataframeExample.run(DataframeExample.java:47)
    at com.hbfinance.DataframeExample.main(DataframeExample.java:88)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:497)
    at org.apache.spark.deploy.SparkSubmit$.org$apache$spark$deploy$SparkSubmit$$runMain(SparkSubmit.scala:672)
    at org.apache.spark.deploy.SparkSubmit$.doRunMain$1(SparkSubmit.scala:180)
    at org.apache.spark.deploy.SparkSubmit$.submit(SparkSubmit.scala:205)
    at org.apache.spark.deploy.SparkSubmit$.main(SparkSubmit.scala:120)
    at org.apache.spark.deploy.SparkSubmit.main(SparkSubmit.scala)
Caused by: java.lang.ClassNotFoundException: com.mongodb.hadoop.MongoInputFormat
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 11 more
16/10/12 12:42:11 INFO SparkContext: Invoking stop() from shutdown hook
16/10/12 12:42:11 INFO SparkUI: Stopped Spark web UI at http://178.62.18.22:4040
16/10/12 12:42:11 INFO DAGScheduler: Stopping DAGScheduler
16/10/12 12:42:12 INFO MapOutputTrackerMasterEndpoint: MapOutputTrackerMasterEndpoint stopped!
16/10/12 12:42:12 INFO MemoryStore: MemoryStore cleared
16/10/12 12:42:12 INFO BlockManager: BlockManager stopped
16/10/12 12:42:12 INFO BlockManagerMaster: BlockManagerMaster stopped
16/10/12 12:42:12 INFO OutputCommitCoordinator$OutputCommitCoordinatorEndpoint: OutputCommitCoordinator stopped!
16/10/12 12:42:12 INFO SparkContext: Successfully stopped SparkContext
16/10/12 12:42:12 INFO ShutdownHookManager: Shutdown hook called
16/10/12 12:42:12 INFO ShutdownHookManager: Deleting directory /tmp/spark-f80de7bc-6613-4125-b047-eae7574df54e

I'm not sure where I've gone wrong. Here is my POM file:

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance" xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/maven-v4_0_0.xsd">
<modelVersion>4.0.0</modelVersion>
<groupId>com.hbfinance</groupId>
<artifactId>hbfinance-module</artifactId>
<packaging>jar</packaging>
<version>1.0-SNAPSHOT</version>
<name>hbfinance-module</name>
<url>http://maven.apache.org</url>
<dependencies>
    <dependency>
        <groupId>junit</groupId>
        <artifactId>junit</artifactId>
        <version>3.8.1</version>
        <scope>test</scope>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-sql_2.11</artifactId>
        <version>1.5.1</version>
    </dependency>
    <dependency>
        <groupId>org.apache.spark</groupId>
        <artifactId>spark-core_2.11</artifactId>
        <version>1.5.1</version>
    </dependency>
    <dependency>
        <groupId>log4j</groupId>
        <artifactId>log4j</artifactId>
        <version>1.2.14</version>
    </dependency>
    <dependency>
        <groupId>org.mongodb.mongo-hadoop</groupId>
        <artifactId>mongo-hadoop-core</artifactId>
        <version>1.4.1</version>
    </dependency>
</dependencies>
<build>
    <plugins>
        <plugin>
            <groupId>org.apache.maven.plugins</groupId>
            <artifactId>maven-compiler-plugin</artifactId>
            <version>3.1</version>
            <configuration>
                <source>1.6</source>
                <target>1.6</target>
            </configuration>
        </plugin>
    </plugins>
</build>
</project>

Though I should add the code as well:

package com.hbfinance;

import org.apache.hadoop.conf.Configuration;
import org.apache.spark.SparkConf;
import org.apache.spark.api.java.JavaPairRDD;
import org.apache.spark.api.java.JavaRDD;
import org.apache.spark.api.java.JavaSparkContext;
import org.apache.spark.api.java.function.Function;
import org.apache.spark.sql.DataFrame;
import org.apache.spark.sql.SQLContext;
import org.bson.BSONObject;
import scala.Tuple2;

import com.mongodb.hadoop.MongoInputFormat;

public class DataframeExample {

    public void run() {
    JavaSparkContext sc = new JavaSparkContext(new SparkConf());
    // Set configuration options for the MongoDB Hadoop Connector.
    Configuration mongodbConfig = new Configuration();
    // MongoInputFormat allows us to read from a live MongoDB instance.
    // We could also use BSONFileInputFormat to read BSON snapshots.
    mongodbConfig.set("mongo.job.input.format", "com.mongodb.hadoop.MongoInputFormat");

    // MongoDB connection string naming a collection to use.
    // If using BSON, use "mapred.input.dir" to configure the directory
    // where BSON files are located instead.
    mongodbConfig.set("mongo.input.uri",
      "mongodb://hadoopUser:Pocup1ne9@localhost:27017/hbdata.ppt_logs");
    // mongodbConfig.set("mongo.input.uri",
    //   "mongodb://hadoopUser:Pocup1ne9@localhost:27017/hbdata.ppa_logs");
    // mongodbConfig.set("mongo.input.uri",
    //   "mongodb://hadoopUser:Pocup1ne9@localhost:27017/hbdata.dd_logs");
    // mongodbConfig.set("mongo.input.uri",
    //   "mongodb://hadoopUser:Pocup1ne9@localhost:27017/hbdata.fav_logs");
    // mongodbConfig.set("mongo.input.uri",
    //   "mongodb://hadoopUser:Pocup1ne9@localhost:27017/hbdata.pps_logs");

    // Create an RDD backed by the MongoDB collection.
    JavaPairRDD<Object, BSONObject> documents = sc.newAPIHadoopRDD(
      mongodbConfig,            // Configuration
      MongoInputFormat.class,   // InputFormat: read from a live cluster.
      Object.class,             // Key class
      BSONObject.class          // Value class
    );

    JavaRDD<AppLog> logs = documents.map(

      new Function<Tuple2<Object, BSONObject>, AppLog>() {

          public AppLog call(final Tuple2<Object, BSONObject> tuple) {
              AppLog log = new AppLog();
              BSONObject header =
                (BSONObject) tuple._2().get("headers");

              log.setTarget((String) header.get("target"));
              log.setAction((String) header.get("action"));

              return log;
          }
      }
    );

    SQLContext sqlContext = new org.apache.spark.sql.SQLContext(sc);

    DataFrame logsSchema = sqlContext.createDataFrame(logs, AppLog.class);
    logsSchema.registerTempTable("logs");

    DataFrame groupedMessages = sqlContext.sql(
      "select target, action, Count(*) from logs group by target, action");
      // "SELECT to, body FROM messages WHERE to = \"eric.bass@enron.com\"");



    groupedMessages.show();

    logsSchema.printSchema();
    }

    public static void main(final String[] args) {
        new DataframeExample().run();
    }
}

Try opening hbfinance-module-1.0-SNAPSHOT.jar in an archive explorer and see which class files it contains. By default, Maven only includes your own class files in the built jar, not those of your dependencies. When you run locally on your machine, the dependencies are already on the classpath, so you don't see any error; but when you send the jar to Spark, those classes are missing and the exception is thrown.
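For example, you can check from the command line instead (assuming the JDK's jar tool is on your PATH). If the jar was built with Maven's default packaging, this prints nothing, confirming the MongoDB classes are not inside:

jar tf ~/new/hbfinance-module-1.0-SNAPSHOT.jar | grep -i mongo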

You can solve this in two ways:

  1. Pass the jars for all your dependencies to Spark when you submit, using the --jars option of spark-submit (see the sketch after this list).

  2. Build an "uber jar" of hbfinance-module with the Maven Shade Plugin, so the dependency classes are packaged inside your own jar (a sample plugin configuration is sketched below).
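For option 1, a sketch of the submit command. The --jars flag is standard spark-submit, but the jar paths below are illustrative and depend on where the artifacts live on your machine (mongo-hadoop-core generally also needs the MongoDB Java driver jar at runtime):

./bin/spark-submit --class "DataframeExample" --master local[2] \
  --jars ~/.m2/repository/org/mongodb/mongo-hadoop/mongo-hadoop-core/1.4.1/mongo-hadoop-core-1.4.1.jar,/path/to/mongo-java-driver.jar \
  ~/new/hbfinance-module-1.0-SNAPSHOT.jar /

For option 2, a minimal Maven Shade Plugin configuration, added alongside the existing maven-compiler-plugin inside the <plugins> section of your POM (the plugin version shown is illustrative):

<plugin>
    <groupId>org.apache.maven.plugins</groupId>
    <artifactId>maven-shade-plugin</artifactId>
    <version>2.4.3</version>
    <executions>
        <execution>
            <phase>package</phase>
            <goals>
                <goal>shade</goal>
            </goals>
        </execution>
    </executions>
</plugin>

With this in place, mvn package produces an uber jar containing the mongo-hadoop classes. You may also want to mark the spark-core and spark-sql dependencies as provided so they are not bundled, since Spark already supplies them at runtime.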

A simple way to get around this error is to export the jar from Eclipse, but in the "Library handling" section make sure to select "Extract required libraries into generated JAR".

This will give you an inflated jar, but it simplifies the jar references on Spark.
