No FileSystem for scheme "s3" when trying to read a list of files with Spark from EC2
I'm trying to provide a list of files for Spark to read as and when it needs them (which is why I'd rather not use boto or anything else to pre-download all the files onto the instance and only then read them into Spark "locally").
os.environ['PYSPARK_SUBMIT_ARGS'] = "--master local[3] pyspark-shell"
spark = SparkSession.builder.getOrCreate()
spark.sparkContext._jsc.hadoopConfiguration().set('fs.s3.access.key', credentials['AccessKeyId'])
spark.sparkContext._jsc.hadoopConfiguration().set('fs.s3.access.key', credentials['SecretAccessKey'])
spark.read.json(['s3://url/3521.gz', 's3://url/2734.gz'])
No idea what local[3] is about, but without this --master flag I was getting another exception:
Exception: Java gateway process exited before sending the driver its port number.
Now, I'm getting this:
Py4JJavaError: An error occurred while calling o37.json.
: org.apache.hadoop.fs.UnsupportedFileSystemException: No FileSystem for scheme "s3"
...
Not sure what o37.json refers to here, but it probably doesn't matter.
I saw a bunch of answers to similar questions suggesting adding flags like:
os.environ['PYSPARK_SUBMIT_ARGS'] = "--packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2 pyspark-shell"
I tried both prepending and appending it to the other flag, but it doesn't work.
The same goes for the many variations I see in other answers and elsewhere on the internet (with different packages and versions), for example:
os.environ['PYSPARK_SUBMIT_ARGS'] = '--master local[*] --jars spark-snowflake_2.12-2.8.4-spark_3.0.jar,postgresql-42.2.19.jar,mysql-connector-java-8.0.23.jar,hadoop-aws-3.2.2,aws-java-sdk-bundle-1.11.563.jar'
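For what it's worth, my (possibly wrong) understanding is that both flags have to live in the same PYSPARK_SUBMIT_ARGS string, set before any session is created, with pyspark-shell as the last token. A sketch of the combined form I was aiming for, reusing the package coordinates from one of those answers (which I haven't verified match my Spark build):

import os
from pyspark.sql import SparkSession

# PYSPARK_SUBMIT_ARGS is only read when the JVM gateway is launched, so it
# must be set before the first SparkSession/SparkContext in this process,
# and the trailing "pyspark-shell" token has to come last.
os.environ['PYSPARK_SUBMIT_ARGS'] = (
    "--master local[3] "
    "--packages com.amazonaws:aws-java-sdk-pom:1.10.34,org.apache.hadoop:hadoop-aws:2.7.2 "
    "pyspark-shell"
)
spark = SparkSession.builder.getOrCreate()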
A typical example for reading files from S3 is as below.
Additionally, you can go through this answer to ensure the minimal structure and necessary modules are in place - java.io.IOException: No FileSystem for scheme: s3
os.environ['PYSPARK_SUBMIT_ARGS'] = "--packages=com.amazonaws:aws-java-sdk-bundle:1.11.375,org.apache.hadoop:hadoop-aws:3.2.0 pyspark-shell"
sc = SparkContext.getOrCreate()
sql = SQLContext(sc)
hadoop_conf = sc._jsc.hadoopConfiguration()
config = configparser.ConfigParser()
config.read(os.path.expanduser("~/.aws/credentials"))
access_key = config.get("****", "aws_access_key_id")
secret_key = config.get("****", "aws_secret_access_key")
session_key = config.get("****", "aws_session_token")
hadoop_conf.set("fs.s3.aws.credentials.provider", "org.apache.hadoop.fs.s3.TemporaryAWSCredentialsProvider")
hadoop_conf.set("fs.s3a.access.key", access_key)
hadoop_conf.set("fs.s3a.secret.key", secret_key)
hadoop_conf.set("fs.s3a.session.token", session_key)
s3_path = "s3a://xxxx/yyyy/zzzz/"
sparkDF = sql.read.parquet(s3_path)
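With that in place, the gzipped JSON files from the question should be readable the same way, provided the URLs use the s3a:// scheme that hadoop-aws actually registers; plain s3:// has no FileSystem implementation on a stock Apache Hadoop build, which is exactly what the exception is complaining about. A minimal sketch, reusing the placeholder bucket URLs from the question:

# Swapping s3:// for s3a:// is what makes the
# "No FileSystem for scheme" error go away.
df = sql.read.json(['s3a://url/3521.gz', 's3a://url/2734.gz'])
df.printSchema()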