
Using S3 (Frankfurt) with Spark

Is anyone using S3 in Frankfurt with Hadoop/Spark 1.6.0?

I am trying to store the result of a job on S3; my dependencies are declared as follows:

"org.apache.spark" %% "spark-core" % "1.6.0" exclude("org.apache.hadoop", "hadoop-client"),
"org.apache.spark" %% "spark-sql" % "1.6.0",
"org.apache.hadoop" % "hadoop-client" % "2.7.2",
"org.apache.hadoop" % "hadoop-aws" % "2.7.2"

I have set the following configuration:

System.setProperty("com.amazonaws.services.s3.enableV4", "true")
sc.hadoopConfiguration.set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com")

When I call saveAsTextFile on my RDD it starts fine, saving everything to S3. However, after some time, when it is moving the data from _temporary to the final output location, it fails with the error:

Exception in thread "main" com.amazonaws.services.s3.model.AmazonS3Exception: Status Code: 403, AWS Service: Amazon S3, AWS Request ID: XXXXXXXXXXXXXXXX, AWS Error Code: SignatureDoesNotMatch, AWS Error Message: The request signature we calculated does not match the signature you provided. Check your key and signing method., S3 Extended Request ID: XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX=
at com.amazonaws.http.AmazonHttpClient.handleErrorResponse(AmazonHttpClient.java:798)
at com.amazonaws.http.AmazonHttpClient.executeHelper(AmazonHttpClient.java:421)
at com.amazonaws.http.AmazonHttpClient.execute(AmazonHttpClient.java:232)
at com.amazonaws.services.s3.AmazonS3Client.invoke(AmazonS3Client.java:3528)
at com.amazonaws.services.s3.AmazonS3Client.copyObject(AmazonS3Client.java:1507)
at com.amazonaws.services.s3.transfer.internal.CopyCallable.copyInOneChunk(CopyCallable.java:143)
at com.amazonaws.services.s3.transfer.internal.CopyCallable.call(CopyCallable.java:131)
at com.amazonaws.services.s3.transfer.internal.CopyMonitor.copy(CopyMonitor.java:189)
at com.amazonaws.services.s3.transfer.internal.CopyMonitor.call(CopyMonitor.java:134)
at com.amazonaws.services.s3.transfer.internal.CopyMonitor.call(CopyMonitor.java:46)
at java.util.concurrent.FutureTask.run(FutureTask.java:266)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)
at java.lang.Thread.run(Thread.java:745)

If I use the hadoop-client bundled with the Spark package, the transfer does not even start. The error occurs randomly; sometimes it works and sometimes it doesn't.

Please try to set the values below:

System.setProperty("com.amazonaws.services.s3.enableV4", "true")
hadoopConf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
hadoopConf.set("com.amazonaws.services.s3.enableV4", "true")
hadoopConf.set("fs.s3a.endpoint", "s3." + region + ".amazonaws.com")

Please set the region where that bucket is located; in my case it was: eu-central-1

and add the dependency with Gradle or in some other way:

dependencies {
    compile 'org.apache.hadoop:hadoop-aws:2.7.2'
}

Hope this helps.
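For completeness, here is a minimal self-contained Scala sketch that applies the settings above. The bucket name is a placeholder, and reading the credentials from environment variables is an assumption for illustration, not part of the original answer:

import org.apache.spark.{SparkConf, SparkContext}

object S3FrankfurtExample {
  def main(args: Array[String]): Unit = {
    // V4 signing must be enabled before the AWS SDK is first used
    System.setProperty("com.amazonaws.services.s3.enableV4", "true")

    val sc = new SparkContext(new SparkConf().setAppName("s3a-frankfurt"))
    val hadoopConf = sc.hadoopConfiguration
    hadoopConf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
    hadoopConf.set("com.amazonaws.services.s3.enableV4", "true")
    hadoopConf.set("fs.s3a.endpoint", "s3.eu-central-1.amazonaws.com") // region of the bucket
    hadoopConf.set("fs.s3a.access.key", sys.env("AWS_ACCESS_KEY_ID"))      // assumption: credentials in env vars
    hadoopConf.set("fs.s3a.secret.key", sys.env("AWS_SECRET_ACCESS_KEY"))

    // write and read back through the s3a:// scheme ("your-bucket" is a placeholder)
    sc.parallelize(Seq("a", "b", "c")).saveAsTextFile("s3a://your-bucket/output")
    sc.textFile("s3a://your-bucket/output").take(3).foreach(println)
    sc.stop()
  }
}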

In case you are using pyspark, the following worked for me:

import os

aws_profile = "your_profile"
aws_region = "eu-central-1"
s3_bucket = "your_bucket"

# see https://github.com/jupyter/docker-stacks/issues/127#issuecomment-214594895
os.environ['PYSPARK_SUBMIT_ARGS'] = "--packages=org.apache.hadoop:hadoop-aws:2.7.3 pyspark-shell"

# If this doesn't work you might have to delete your ~/.ivy2 directory to reset your package cache.
# (see https://github.com/databricks/spark-redshift/issues/244#issuecomment-239950148)
import pyspark
sc=pyspark.SparkContext()
# see https://github.com/databricks/spark-redshift/issues/298#issuecomment-271834485
sc.setSystemProperty("com.amazonaws.services.s3.enableV4", "true")

# see https://stackoverflow.com/questions/28844631/how-to-set-hadoop-configuration-values-from-pyspark
hadoop_conf=sc._jsc.hadoopConfiguration()
# see https://stackoverflow.com/questions/43454117/how-do-you-use-s3a-with-spark-2-1-0-on-aws-us-east-2
hadoop_conf.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
hadoop_conf.set("com.amazonaws.services.s3.enableV4", "true")
hadoop_conf.set("fs.s3a.access.key", access_id)
hadoop_conf.set("fs.s3a.secret.key", access_key)

# see https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region
hadoop_conf.set("fs.s3a.endpoint", "s3." + aws_region + ".amazonaws.com")

sql=pyspark.sql.SparkSession(sc)
path = s3_bucket + "/your_file_on_s3"
dataS3=sql.read.parquet("s3a://" + path)

Inspired by the other answers, running the following directly in the pyspark shell produced the desired output for me:

sc.setSystemProperty("com.amazonaws.services.s3.enableV4", "true") # fails without this
hc=sc._jsc.hadoopConfiguration()
hc.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
hc.set("com.amazonaws.services.s3.enableV4", "true")
hc.set("fs.s3a.endpoint", end_point)
hc.set("fs.s3a.access.key",access_key)
hc.set("fs.s3a.secret.key",secret_key)
data = sc.textFile("s3a://bucket/file")
data.take(3)

Choose your endpoint from the list of S3 endpoints (https://docs.aws.amazon.com/general/latest/gr/rande.html#s3_region). I was able to fetch data from Asia Pacific (Mumbai) (ap-south-1), which is a Version 4-only region.
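For example, for that Mumbai bucket the endpoint setting would look like the following (an illustrative value taken from that endpoints page, not from the original answer):

hc.set("fs.s3a.endpoint", "s3.ap-south-1.amazonaws.com")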
