
How should I load a file on S3 using Spark?

I installed Spark via pip install pyspark

I'm using the following code to create a DataFrame from a file on S3.

from pyspark.sql import SparkSession

spark = SparkSession.builder \
            .config('spark.driver.extraClassPath', '/home/ubuntu/spark/jars/aws-java-sdk-1.11.335.jar:/home/ubuntu/spark/jars/hadoop-aws-2.8.4.jar') \
            .appName("cluster").getOrCreate()
df = spark.read.load('s3a://bucket/path/to/file')

However, I got an error:

---------------------------------------------------------------------------
Py4JJavaError                             Traceback (most recent call last)
<ipython-input> in <module>()
----> 1 df = spark.read.load('s3a://bucket/path/to/file')

~/miniconda3/envs/audience/lib/python3.6/site-packages/pyspark/sql/readwriter.py in load(self, path, format, schema, **options)
    164         self.options(**options)
    165         if isinstance(path, basestring):
--> 166             return self._df(self._jreader.load(path))
    167         elif path is not None:
    168             if type(path) != list:

~/miniconda3/envs/audience/lib/python3.6/site-packages/py4j/java_gateway.py in __call__(self, *args)
   1158         answer = self.gateway_client.send_command(command)
   1159         return_value = get_return_value(
-> 1160             answer, self.gateway_client, self.target_id, self.name)
   1161
   1162         for temp_arg in temp_args:

~/miniconda3/envs/audience/lib/python3.6/site-packages/pyspark/sql/utils.py in deco(*a, **kw)
     61     def deco(*a, **kw):
     62         try:
---> 63             return f(*a, **kw)
     64         except py4j.protocol.Py4JJavaError as e:
     65             s = e.java_exception.toString()

~/miniconda3/envs/audience/lib/python3.6/site-packages/py4j/protocol.py in get_return_value(answer, gateway_client, target_id, name)
    318                 raise Py4JJavaError(
    319                     "An error occurred while calling {0}{1}{2}.\n".
--> 320                     format(target_id, ".", name), value)
    321             else:
    322                 raise Py4JError(

Py4JJavaError: An error occurred while calling o220.load.
: java.lang.NoClassDefFoundError: org/apache/hadoop/fs/StorageStatistics
	at java.lang.Class.forName0(Native Method)
	at java.lang.Class.forName(Class.java:348)
	at org.apache.hadoop.conf.Configuration.getClassByNameOrNull(Configuration.java:2134)
	at org.apache.hadoop.conf.Configuration.getClassByName(Configuration.java:2099)
	at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2193)
	at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2654)
	at org.apache.hadoop.fs.FileSystem.createFileSystem(FileSystem.java:2667)
	at org.apache.hadoop.fs.FileSystem.access$200(FileSystem.java:94)
	at org.apache.hadoop.fs.FileSystem$Cache.getInternal(FileSystem.java:2703)
	at org.apache.hadoop.fs.FileSystem$Cache.get(FileSystem.java:2685)
	at org.apache.hadoop.fs.FileSystem.get(FileSystem.java:373)
	at org.apache.hadoop.fs.Path.getFileSystem(Path.java:295)
	at org.apache.spark.sql.execution.streaming.FileStreamSink$.hasMetadata(FileStreamSink.scala:44)
	at org.apache.spark.sql.execution.datasources.DataSource.resolveRelation(DataSource.scala:354)
	at org.apache.spark.sql.DataFrameReader.loadV1Source(DataFrameReader.scala:239)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:227)
	at org.apache.spark.sql.DataFrameReader.load(DataFrameReader.scala:174)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at py4j.reflection.MethodInvoker.invoke(MethodInvoker.java:244)
	at py4j.reflection.ReflectionEngine.invoke(ReflectionEngine.java:357)
	at py4j.Gateway.invoke(Gateway.java:282)
	at py4j.commands.AbstractCommand.invokeMethod(AbstractCommand.java:132)
	at py4j.commands.CallCommand.execute(CallCommand.java:79)
	at py4j.GatewayConnection.run(GatewayConnection.java:214)
	at java.lang.Thread.run(Thread.java:748)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.fs.StorageStatistics
	at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
	at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:349)
	at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
	... 28 more

If I change s3a to s3 or s3n, it asks for the AWS access key. However, I have already given the EC2 instance the AmazonS3FullAccess policy in IAM.

IllegalArgumentException: 'AWS Access Key ID and Secret Access Key must be specified by setting the fs.s3.awsAccessKeyId and fs.s3.awsSecretAccessKey properties (respectively).'

Any help would be appreciated.

You need a way to expose your AWS credentials to the script.

The example below, using botocore, may be overreaching, but it saves you from needing to roll your own AWS config or credential parser.

First,

pip install botocore

Then create a session and blindly resolve your credentials. The order for the credential resolution is documented here.

from pyspark.sql import SparkSession
import botocore.session

session = botocore.session.get_session()
credentials = session.get_credentials()

spark = (
    SparkSession
    .builder
    .config(
        'spark.driver.extraClassPath', 
        '/home/ubuntu/spark/jars/aws-java-sdk-1.11.335.jar:'
        '/home/ubuntu/spark/jars/hadoop-aws-2.8.4.jar')
    .config('fs.s3a.access.key', credentials.access_key)
    .config('fs.s3a.secret.key', credentials.secret_key)
    .appName("cluster")
    .getOrCreate()
)

df = spark.read.load('s3a://bucket/path/to/file')

EDIT

When using the s3n filesystem client, the authentication properties look like this:

.config('fs.s3n.awsAccessKeyId', credentials.access_key)
.config('fs.s3n.awsSecretAccessKey', credentials.secret_key)

The first error tells you that Spark cannot find the class org.apache.hadoop.fs.StorageStatistics. Could you ensure that your Spark version matches your Hadoop JARs? Normally the class Spark tries to load was added in this commit https://github.com/apache/hadoop/commit/687233f20d24c29041929dd0a99d963cec54b6df#diff-114b1833bd381e88382ade201dc692e8 and, judging by the release tags, first shipped in 3.0.0. Since you use Hadoop 2.8.4, upgrading it to 3.0.0 may be a solution.
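
A quick way to spot this kind of mismatch is to compare the version embedded in the hadoop-aws JAR you pass on the classpath against the Hadoop JARs bundled with the pip-installed PySpark (they live under site-packages/pyspark/jars). The helper below is a hypothetical sketch, not part of Spark, and assumes JAR names of the form hadoop-<module>-<x.y.z>.jar:

```python
import os
import re

def hadoop_jar_versions(jars_dir):
    """Return the set of version strings of all hadoop-* JARs in a directory."""
    versions = set()
    for name in os.listdir(jars_dir):
        m = re.match(r'hadoop-.*-(\d+\.\d+\.\d+)\.jar$', name)
        if m:
            versions.add(m.group(1))
    return versions

def jars_match(jars_dir, aws_jar_name):
    """Check whether a hadoop-aws JAR's version matches the bundled Hadoop JARs."""
    m = re.search(r'(\d+\.\d+\.\d+)\.jar$', aws_jar_name)
    if not m:
        return False
    return m.group(1) in hadoop_jar_versions(jars_dir)
```

If jars_match returns False for your hadoop-aws JAR against PySpark's jars directory, you are mixing Hadoop versions, which is exactly the situation that produces NoClassDefFoundError for Hadoop filesystem classes.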
