Pyspark s3 error : java.lang.NoClassDefFoundError: com/amazonaws/AmazonServiceException
I think I'm encountering a jar incompatibility. I'm using the following jar files to build a Spark cluster:
from pyspark.sql import SparkSession, SQLContext
from pyspark.sql.types import *
from pyspark.sql.functions import *
import sys
spark = (SparkSession.builder
         .appName("AuthorsAges")  # only one appName is needed; a second call just overwrites the first
         .getOrCreate())
spark._jsc.hadoopConfiguration().set("fs.s3a.access.key", "access-key")
spark._jsc.hadoopConfiguration().set("fs.s3a.secret.key", "secret-key")
spark._jsc.hadoopConfiguration().set("fs.s3a.impl","org.apache.hadoop.fs.s3a.S3AFileSystem")
spark._jsc.hadoopConfiguration().set("com.amazonaws.services.s3.enableV4", "true")
spark._jsc.hadoopConfiguration().set("fs.s3a.aws.credentials.provider","org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider")
spark._jsc.hadoopConfiguration().set("fs.s3a.endpoint", "")
input_file='s3a://spark-test-data/Fire_Department_Calls_for_Service.csv'
file_schema = StructType([StructField("Call_Number",StringType(),True),
StructField("Unit_ID",StringType(),True),
StructField("Incident_Number",StringType(),True),
...
...
# Read file into a Spark DataFrame
input_df = (spark.read.format("csv")
            .option("header", "true")
            .schema(file_schema)
            .load(input_file))
The code fails when it starts to execute spark.read.format. It appears that it can't find the class: java.lang.NoClassDefFoundError: com/amazonaws/AmazonServiceException.
My spark-defaults.conf is configured as follows:
spark.jars.packages com.amazonaws:aws-java-sdk:1.11.885,org.apache.hadoop:hadoop-aws:2.7.4
I would appreciate it if someone could help me. Any ideas?
Traceback (most recent call last):
File "<stdin>", line 5, in <module>
File "/usr/local/spark/spark-3.0.1-bin-hadoop2.7/python/pyspark/sql/readwriter.py", line 178, in load
return self._df(self._jreader.load(path))
File "/usr/local/lib/python3.6/site-packages/py4j/java_gateway.py", line 1305, in __call__
answer, self.gateway_client, self.target_id, self.name)
File "/usr/local/spark/spark-3.0.1-bin-hadoop2.7/python/pyspark/sql/utils.py", line 128, in deco
return f(*a, **kw)
File "/usr/local/lib/python3.6/site-packages/py4j/protocol.py", line 328, in get_return_value
format(target_id, ".", name), value)
py4j.protocol.Py4JJavaError: An error occurred while calling o51.load.
: java.lang.NoClassDefFoundError: com/amazonaws/AmazonServiceException
at java.lang.Class.forName0(Native Method)
at java.lang.Class.forName(Class.java:348)
at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2195)
at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:46)
at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:366)
at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:297)
at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:286)
at scala.Option.getOrElse(Option.scala:189)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:286)
at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:232)
at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
at java.lang.reflect.Method.invoke(Method.java:498)
at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
at py4j.Gateway.invoke(Gateway.java:282)
at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
at py4j.commands.CallCommand.execute(CallCommand.java:79)
at py4j.GatewayConnection.run(GatewayConnection.java:238)
at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: com.amazonaws.AmazonServiceException
at java.net.URLClassLoader.findClass(URLClassLoader.java:382)
at java.lang.ClassLoader.loadClass(ClassLoader.java:418)
at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:352)
at java.lang.ClassLoader.loadClass(ClassLoader.java:351)
... 30 more
hadoop-aws 2.7.4 is built against aws-java-sdk 1.7.4, which isn't completely compatible with newer versions, so if you use a newer version of aws-java-sdk, then Hadoop can't find the required classes. You have the following choices:
- use aws-java-sdk 1.7.4, the version that hadoop-aws 2.7.4 was built against, or
- use Spark 2.4 compiled with Hadoop 3 using the hadoop-3.1 profile, as described in the documentation

I encountered the same problem and I was able to resolve it thanks to https://notadatascientist.com/running-apache-spark-and-s3-locally/
Steps to follow:
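The linked post's exact steps aren't reproduced here, but independent of them the core fix is the one the first answer describes: the aws-java-sdk jar must match the version that hadoop-aws was built against (for hadoop-aws 2.7.4 that is aws-java-sdk 1.7.4, i.e. `spark.jars.packages org.apache.hadoop:hadoop-aws:2.7.4,com.amazonaws:aws-java-sdk:1.7.4`). As a quick sanity check, here is a small sketch that flags a mismatched pairing in a `spark.jars.packages` line; note the version table is a hand-written excerpt (an assumption) and should be verified against the hadoop-aws POM for your exact Hadoop release:

```python
# Sketch: flag a hadoop-aws / aws-java-sdk mismatch in a spark.jars.packages line.
# The mapping below is a small hand-checked excerpt -- an assumption; confirm it
# against the hadoop-aws POM for your Hadoop release before relying on it.
EXPECTED_SDK = {
    "2.7.4": "com.amazonaws:aws-java-sdk:1.7.4",
    "3.2.0": "com.amazonaws:aws-java-sdk-bundle:1.11.375",
}

def check_packages(packages):
    """Return a list of problems found in a comma-separated Maven coordinates string."""
    coords = packages.split(",")
    hadoop_aws = next(
        (c for c in coords if c.startswith("org.apache.hadoop:hadoop-aws:")), None)
    if hadoop_aws is None:
        return ["org.apache.hadoop:hadoop-aws is not listed"]
    version = hadoop_aws.rsplit(":", 1)[1]
    expected = EXPECTED_SDK.get(version)
    if expected is not None and expected not in coords:
        return ["hadoop-aws %s expects %s" % (version, expected)]
    return []

# The question's configuration pairs hadoop-aws 2.7.4 with aws-java-sdk 1.11.885:
print(check_packages(
    "com.amazonaws:aws-java-sdk:1.11.885,org.apache.hadoop:hadoop-aws:2.7.4"))
# -> ['hadoop-aws 2.7.4 expects com.amazonaws:aws-java-sdk:1.7.4']
```

Once the two coordinates are consistent, the `NoClassDefFoundError: com/amazonaws/AmazonServiceException` from the traceback above should disappear, because Hadoop's S3A filesystem can then resolve the SDK classes it was compiled against.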