Spark write to S3 bucket giving java.lang.NoClassDefFoundError

I'm trying to integrate Spark 2.3.0 running on my Mac with S3. I can read/write to S3 without any problem using spark-shell. But when I try to do the same using a little Scala program that I run via sbt, I get java.lang.NoClassDefFoundError: org/apache/hadoop/fs/GlobalStorageStatistics$StorageStatisticsProvider.

I have installed hadoop-aws 3.0.0-beta1. I have also set s3 access information in spark-2.3.0/conf/spark-defaults.conf:

spark.hadoop.fs.s3a.impl                              org.apache.hadoop.fs.s3a.S3AFileSystem
spark.hadoop.fs.s3a.access.key                        XXXX
spark.hadoop.fs.s3a.secret.key                        YYYY
spark.hadoop.com.amazonaws.services.s3.enableV4       true
spark.hadoop.fs.s3a.endpoint                          s3.us-east-2.amazonaws.com
spark.hadoop.fs.s3a.fast.upload                       true
spark.hadoop.fs.s3a.encryption.enabled                true
spark.hadoop.fs.s3a.server-side-encryption-algorithm  AES256

The program compiles fine using sbt version 0.13.

name := "S3Test"

scalaVersion := "2.11.8"

libraryDependencies += "org.apache.spark" %% "spark-core" % "2.2.0"
libraryDependencies += "org.apache.spark" %% "spark-sql" % "2.2.0"
libraryDependencies +=  "org.apache.hadoop" % "hadoop-aws" % "3.0.0-beta1"

The Scala code is:

import org.apache.spark.SparkConf
import org.apache.spark.sql.SparkSession
import com.amazonaws._
import com.amazonaws.auth._
import com.amazonaws.services.s3._
import com.amazonaws.services.s3.model._
import java.io._
import org.apache.hadoop.fs.FileSystem
import org.apache.hadoop.fs.s3a.S3AFileSystem
object S3Test {
    def main(args: Array[String]) = {
        val spark = SparkSession.builder().master("local").appName("Spark AWS S3 example").getOrCreate()
        import spark.implicits._
        val df  = spark.read.text("test.txt") 
        df.take(5)
        df.write.save(<s3 bucket>)
    }
}

I have set environment variables for JAVA_HOME, HADOOP_HOME, SPARK_HOME, CLASSPATH, SPARK_DIST_CLASSPATH, etc. But nothing lets me get past this error message.

You can't mix hadoop-* JARs; they all need to be in perfect sync. Which means: cut all the hadoop 2.7 artifacts & replace them.

FWIW, there isn't a significant enough difference between Hadoop 2.8 and Hadoop 3.0-beta-1 in terms of AWS support, other than the S3Guard DDB directory service (performance and listing through DynamoDB), so unless you need that feature, Hadoop 2.8 is going to be adequate.
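
The NoClassDefFoundError above is the typical symptom of that mismatch: hadoop-aws 3.0.0-beta1 references classes such as GlobalStorageStatistics that the older hadoop-* JARs pulled in transitively by spark-core don't contain. A minimal build.sbt sketch of keeping everything on one Hadoop line (the 2.7.3 version below is only illustrative; what matters is that hadoop-client and hadoop-aws resolve to the same release):

name := "S3Test"

scalaVersion := "2.11.8"

val sparkVersion  = "2.2.0"
val hadoopVersion = "2.7.3"  // pick one Hadoop release and use it for every hadoop-* artifact

libraryDependencies ++= Seq(
  "org.apache.spark"  %% "spark-core"    % sparkVersion,
  "org.apache.spark"  %% "spark-sql"     % sparkVersion,
  // pin hadoop-client explicitly so the transitive version from spark-core can't diverge
  "org.apache.hadoop" %  "hadoop-client" % hadoopVersion,
  "org.apache.hadoop" %  "hadoop-aws"    % hadoopVersion
)

If you want the Hadoop 3.0.0-beta1 (or 2.8) S3A features instead, the same rule applies in the other direction: move every hadoop-* dependency, including the ones Spark drags in, to that version.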
