
PYSPARK Error connecting to aws S3: py4j.protocol.Py4JJavaError: java.lang.NoClassDefFoundError: com/amazonaws/AmazonClientException

I have SPARK version 3.3.1 installed on macOS with Scala 2.12.15. The Homebrew install also added OpenJDK 64-Bit Server VM 19.0.1.

Currently using Python 3.9.

Environment variables:

export JAVA_HOME=/usr/local/Cellar/openjdk/19.0.1/libexec/openjdk.jdk/Contents/Home
export SPARK_HOME=/usr/local/Cellar/apache-spark/3.3.1/libexec
export SPARK_LOCAL_DIRS=$HOME/tmp/spark    
export PYSPARK_PYTHON=/usr/local/bin/python3.9

Code

...

conf = SparkConf()

conf.set('spark.jars.packages', 'org.apache.hadoop:hadoop-aws:3.3.4')
conf.set('fs.s3a.aws.credentials.provider','org.apache.hadoop.fs.s3a.SimpleAWSCredentialsProvider')
# conf.set('spark.hadoop.fs.s3a.impl', 'org.apache.hadoop.fs.s3a.S3AFileSystem')
conf.set('spark.hadoop.fs.s3a.access.key', aws_source["access_key_id"])
conf.set('spark.hadoop.fs.s3a.secret.key', aws_source["secret_access_key"])
conf.set('spark.hadoop.fs.s3a.endpoint', aws_source["host"])
conf.set('s3bucket', aws_source['bucket'])

spark = SparkSession.builder.config(conf=conf).getOrCreate()
sc = spark.sparkContext

s3folder = f"s3a://{conf.get('s3bucket')}/spark/fashion/sales"
df = spark.read.options(header='true', inferSchema='true').csv(s3folder)

Terminal output

/usr/local/bin/python3.9 /Users/d051079/Library/CloudStorage/OneDrive-SAPSE/GitHub/sparkcheck/thhspark/connections.py
Warning: Ignoring non-Spark config property: s3bucket
Warning: Ignoring non-Spark config property: fs.s3a.aws.credentials.provider
:: loading settings :: url = jar:file:/usr/local/Cellar/apache-spark/3.3.1/libexec/jars/ivy-2.5.0.jar!/org/apache/ivy/core/settings/ivysettings.xml
Ivy Default Cache set to: /Users/myuser/.ivy2/cache
The jars for the packages stored in: /Users/muuser/.ivy2/jars
org.apache.hadoop#hadoop-aws added as a dependency
:: resolving dependencies :: org.apache.spark#spark-submit-parent-4f584447-37e3-49b8-880f-01011a577f68;1.0
        confs: [default]
        found org.apache.hadoop#hadoop-aws;3.3.4 in central
        found com.amazonaws#aws-java-sdk-bundle;1.12.262 in central
        found org.wildfly.openssl#wildfly-openssl;1.0.7.Final in central
:: resolution report :: resolve 172ms :: artifacts dl 9ms
        :: modules in use:
        com.amazonaws#aws-java-sdk-bundle;1.12.262 from central in [default]
        org.apache.hadoop#hadoop-aws;3.3.4 from central in [default]
        org.wildfly.openssl#wildfly-openssl;1.0.7.Final from central in [default]
        ---------------------------------------------------------------------
        |                  |            modules            ||   artifacts   |
        |       conf       | number| search|dwnlded|evicted|| number|dwnlded|
        ---------------------------------------------------------------------
        |      default     |   3   |   0   |   0   |   0   ||   3   |   0   |
        ---------------------------------------------------------------------
:: retrieving :: org.apache.spark#spark-submit-parent-4f584447-37e3-49b8-880f-01011a577f68
        confs: [default]
        0 artifacts copied, 3 already retrieved (0kB/10ms)
22/12/16 15:16:00 WARN NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Setting default log level to "WARN".
To adjust logging level use sc.setLogLevel(newLevel). For SparkR, use setLogLevel(newLevel).
Traceback (most recent call last):
  File "/Users/myuser/Library/CloudStorage/OneDrive-SAPSE/GitHub/sparkcheck/thhspark/connections.py", line 44, in <module>
    df = spark.read.options(header='true', inferSchema='true').csv(s3folder)
  File "/usr/local/Cellar/apache-spark/3.3.1/libexec/python/pyspark/sql/readwriter.py", line 535, in csv
    return self._df(self._jreader.csv(self._spark._sc._jvm.PythonUtils.toSeq(path)))
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/py4j/java_gateway.py", line 1321, in __call__
    return_value = get_return_value(
  File "/usr/local/Cellar/apache-spark/3.3.1/libexec/python/pyspark/sql/utils.py", line 190, in deco
    return f(*a, **kw)
  File "/Library/Frameworks/Python.framework/Versions/3.9/lib/python3.9/site-packages/py4j/protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o43.csv.
: java.lang.NoClassDefFoundError: com/amazonaws/AmazonClientException
        at java.base/java.lang.Class.forName0(Native Method)
        at java.base/java.lang.Class.forName(Class.java:398)
        at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2625)
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2590)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2686)
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3431)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3466)
        at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
        at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:53)
        at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:370)
        at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:228)
        at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:210)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:210)
        at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:537)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
        at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
        at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.ClassNotFoundException: com.amazonaws.AmazonClientException
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
        at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
        ... 31 more

I suspected py4j might be the culprit and tried to swap Spark from OpenJDK 19 to openjdk@11 by pointing JAVA_HOME at it. Firstly, pyspark ignored this and kept using JDK 19; secondly, it did not help.

I also tried org.apache.hadoop:hadoop-aws:3.3.2, since that is the Hadoop version I found among my jars, but that did not help either.
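
(As a side note, here is a minimal sketch of checking which Hadoop version the running Spark build was compiled against from PySpark itself, rather than inspecting the jars folder by hand:)

# Sketch: print the Hadoop version bundled with the running Spark,
# so the hadoop-aws coordinate can be matched against it.
from pyspark.sql import SparkSession

spark = SparkSession.builder.getOrCreate()
print(spark.sparkContext._jvm.org.apache.hadoop.util.VersionInfo.getVersion())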

The troubleshooting page did not help either: https://hadoop.apache.org/docs/stable/hadoop-aws/tools/hadoop-aws/troubleshooting_s3a.html

It looks like you already have the hadoop-aws jar on your classpath, good start!

com.amazonaws.AmazonClientException comes from the aws-java-sdk-bundle jar, so you need to add that to your classpath as well. You can get the version you want from the Maven Repository.
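
For example, a minimal sketch of pulling the SDK bundle onto the classpath through spark.jars.packages alongside hadoop-aws (1.12.262 is the bundle version Ivy resolved in your log; treat the exact coordinates as illustrative):

from pyspark import SparkConf
from pyspark.sql import SparkSession

conf = SparkConf()
# Declare both the S3A connector and the AWS SDK bundle it depends on,
# so com.amazonaws.* classes end up on the driver and executor classpath.
conf.set('spark.jars.packages',
         'org.apache.hadoop:hadoop-aws:3.3.4,'
         'com.amazonaws:aws-java-sdk-bundle:1.12.262')

spark = SparkSession.builder.config(conf=conf).getOrCreate()

Alternatively, download the aws-java-sdk-bundle jar from the Maven Repository and point spark.jars (or --jars on spark-submit) at the local file.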

Hope this helps!
