
Configuring Pyspark with Amazon S3 giving java.lang.ClassNotFoundException: com.amazonaws.auth.AWSCredentialsProvider

I am trying to configure pyspark with Amazon S3, but I get the following error:

py4j.protocol.Py4JJavaError: An error occurred while calling o66.csv. : java.lang.NoClassDefFoundError: com/amazonaws/auth/AWSCredentialsProvider

I configure Spark as follows:

from pyspark import SparkConf
from pyspark.sql import SparkSession


class Spark():
    def __init__(self):
        self.url = settings.SPARK_MASTER_URL
        self.name = settings.SPARK_APP_NAME
        self.configuration = SparkConf().setAppName(self.name).setMaster(self.url)\
            .set("com.amazonaws.services.s3.enableV4", "true")\
            .set("spark.hadoop.fs.s3a.aws.credentials.provider", "com.amazonaws.auth.DefaultAWSCredentialsProviderChain")\
            .set("spark.hadoop.fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")\
            .set("spark.driver.extraClassPath", "/opt/airflow/spark/jars/hadoop-aws-3.3.2.jar,/opt/airflow/spark/jars/aws-java-sdk-s3-1.12.349.jar")\
            .set("spark.hadoop.fs.s3a.access.key", "access key here")\
            .set("spark.hadoop.fs.s3a.secret.key", "secret key here")\
            .set("spark.hadoop.fs.s3a.endpoint", "s3.eu-west-1.amazonaws.com")
        self.session = SparkSession.builder.config(conf=self.configuration).getOrCreate()

    def getSession(self):
        return self.session

    def __del__(self):
        self.session.stop()

And I read a csv file from S3 like this:

spark = Spark()
session = spark.getSession()
df = session.read.csv("s3a://path-here")

I get the error:

py4j.protocol.Py4JJavaError: An error occurred while calling o66.csv. : java.lang.NoClassDefFoundError: com/amazonaws/auth/AWSCredentialsProvider

Am I missing something? Thanks!

Full traceback:

Process SpawnProcess-1:
Traceback (most recent call last):
  File "/usr/lib/python3.10/multiprocessing/process.py", line 314, in _bootstrap
    self.run()
  File "/usr/lib/python3.10/multiprocessing/process.py", line 108, in run
    self._target(*self._args, **self._kwargs)
  File "/usr/lib/python3/dist-packages/uvicorn/subprocess.py", line 76, in subprocess_started
    target(sockets=sockets)
  File "/usr/lib/python3/dist-packages/uvicorn/server.py", line 68, in run
    return asyncio.run(self.serve(sockets=sockets))
  File "/usr/lib/python3.10/asyncio/runners.py", line 44, in run
    return loop.run_until_complete(main)
  File "/usr/lib/python3.10/asyncio/base_events.py", line 646, in run_until_complete
    return future.result()
  File "/usr/lib/python3/dist-packages/uvicorn/server.py", line 76, in serve
    config.load()
  File "/usr/lib/python3/dist-packages/uvicorn/config.py", line 448, in load
    self.loaded_app = import_from_string(self.app)
  File "/usr/lib/python3/dist-packages/uvicorn/importer.py", line 21, in import_from_string
    module = importlib.import_module(module_str)
  File "/usr/lib/python3.10/importlib/__init__.py", line 126, in import_module
    return _bootstrap._gcd_import(name[level:], package, level)
  File "<frozen importlib._bootstrap>", line 1050, in _gcd_import
  File "<frozen importlib._bootstrap>", line 1027, in _find_and_load
  File "<frozen importlib._bootstrap>", line 1006, in _find_and_load_unlocked
  File "<frozen importlib._bootstrap>", line 688, in _load_unlocked
  File "<frozen importlib._bootstrap_external>", line 883, in exec_module
  File "<frozen importlib._bootstrap>", line 241, in _call_with_frames_removed
  File "/home/zeinab/Desktop/rams/saiyan/DataProc/./app.py", line 13, in <module>
    df = ss.read.csv("s3a://saiyan/exports/Behumen@2fbecd8e-3a61-4d20-881a-b15e5ca054cd/2022-12-01T11:25:40Z[UTC]/outlets.csv")
  File "/home/zeinab/.local/lib/python3.10/site-packages/pyspark/sql/readwriter.py", line 535, in csv
    return self._df(self._jreader.csv(self._spark._sc._jvm.PythonUtils.toSeq(path)))
  File "/home/zeinab/.local/lib/python3.10/site-packages/py4j/java_gateway.py", line 1321, in __call__
    return_value = get_return_value(
  File "/home/zeinab/.local/lib/python3.10/site-packages/pyspark/sql/utils.py", line 190, in deco
    return f(*a, **kw)
  File "/home/zeinab/.local/lib/python3.10/site-packages/py4j/protocol.py", line 326, in get_return_value
    raise Py4JJavaError(
py4j.protocol.Py4JJavaError: An error occurred while calling o66.csv.
: java.lang.NoClassDefFoundError: com/amazonaws/auth/AWSCredentialsProvider
        at java.base/java.lang.Class.forName0(Native Method)
        at java.base/java.lang.Class.forName(Class.java:398)
        at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2625)
        at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2590)
        at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2686)
        at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:3431)
        at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:3466)
        at org.apache.hadoop.fs.FileSystem.access$300(FileSystem.java:174)
        at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:3574)
        at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:3521)
        at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:540)
        at org.apache.hadoop.fs.Path.getFileSystem(Path.java:365)
        at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:53)
        at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:370)
        at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:228)
        at org.apache.spark.sql.DataFrameReader.$anonfun$load$2(DataFrameReader.scala:210)
        at scala.Option.getOrElse(Option.scala:189)
        at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:210)
        at org.apache.spark.sql.DataFrameReader.csv(DataFrameReader.scala:537)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
        at java.base/jdk.internal.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
        at java.base/jdk.internal.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
        at java.base/java.lang.reflect.Method.invoke(Method.java:566)
        at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
        at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
        at py4j.Gateway.invoke(Gateway.java:282)
        at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
        at py4j.commands.CallCommand.execute(CallCommand.java:79)
        at py4j.ClientServerConnection.waitForCommands(ClientServerConnection.java:182)
        at py4j.ClientServerConnection.run(ClientServerConnection.java:106)
        at java.base/java.lang.Thread.run(Thread.java:829)
Caused by: java.lang.ClassNotFoundException: com.amazonaws.auth.AWSCredentialsProvider
        at java.base/jdk.internal.loader.BuiltinClassLoader.loadClass(BuiltinClassLoader.java:581)
        at java.base/jdk.internal.loader.ClassLoaders$AppClassLoader.loadClass(ClassLoaders.java:178)
        at java.base/java.lang.ClassLoader.loadClass(ClassLoader.java:522)
        ... 31 more

I solved my problem by making the following changes (a sketch of the resulting configuration follows the list):

  • Upgrading the hadoop-aws jar to 3.3.4

  • Upgrading the aws-java-sdk-bundle jar to 1.12.353

  • Removing the configuration:

     set("spark.hadoop.fs.s3a.aws.credentials.provider", "com.amazonaws.auth.DefaultAWSCredentialsProviderChain")

