Unable to read a CSV file in an AWS S3 folder locally in IntelliJ using Spark with Scala
I am trying to read a native CSV file from an S3 bucket using Spark with Scala running locally. I am able to read the file using the HTTP protocol, but I intend to use the s3a protocol.
Below is the configuration set up before the read call:
spark.sparkContext.hadoopConfiguration.set("fs.s3a.impl", "org.apache.hadoop.fs.s3a.S3AFileSystem")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.access.key", "Mykey")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.secret.key", "Mysecretkey")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.aws.credentials.provider", "org.apache.hadoop.fs.s3a.BasicAWSCredentialsProvider")
spark.sparkContext.hadoopConfiguration.set("com.amazonaws.services.s3.enableV4", "true")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.endpoint", "eu-west-1.amazonaws.com")
spark.sparkContext.hadoopConfiguration.set("fs.s3a.impl.disable.cache", "true")
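For reference, once the configuration above is in place, the read itself looks something like the sketch below; the bucket and object names are placeholders, and `spark` is assumed to be an existing `SparkSession`:

```scala
// Minimal sketch, assuming `spark` is an existing SparkSession and the
// s3a URI below is a placeholder for your own bucket and key.
val df = spark.read
  .option("header", "true")      // treat the first line as column names
  .option("inferSchema", "true") // let Spark guess column types
  .csv("s3a://my-bucket/path/to/file.csv")

df.show(5)
```

This fragment only runs inside a Spark application with the S3A connector on the classpath, which is exactly what the exception below is complaining about.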
I am getting the below exception:
Exception in thread "main" java.lang.RuntimeException: java.lang.ClassNotFoundException: Class org.apache.hadoop.fs.s3a.S3AFileSystem not found
    at org.apache.hadoop.conf.Configuration.getClass(Configuration.java:2154)
    at org.apache.hadoop.fs.FileSystem.getFileSystemClass(FileSystem.java:2580)
My Spark version is: 2.3.1
Scala version: 2.11
aws-java-sdk version: 1.11.336
hadoop-aws: 2.8.4
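Since the exception points at a class missing from the classpath, the usual fix is to declare the matching connector artifacts in the build. A sketch of an sbt dependency block, assuming the versions listed above (the exact coordinates should be verified against Maven Central):

```scala
// build.sbt fragment: a sketch pinning the versions from the question.
libraryDependencies ++= Seq(
  "org.apache.spark" %% "spark-sql"    % "2.3.1",
  "org.apache.hadoop" % "hadoop-aws"   % "2.8.4",   // provides org.apache.hadoop.fs.s3a.S3AFileSystem
  "com.amazonaws"     % "aws-java-sdk" % "1.11.336" // AWS client classes used by the S3A connector
)
```

Note that hadoop-aws and aws-java-sdk versions must be mutually compatible; mismatched pairs commonly cause other `NoClassDefFoundError`/`NoSuchMethodError` failures at runtime.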
This is the exception for a missing S3 SDK library; you can find more details at https://community.hortonworks.com/articles/25523/hdp-240-and-spark-160-connecting-to-aws-s3-buckets.html
Basics: when you see a ClassNotFoundException, it is caused by binaries missing from the JVM classpath. The root class loader loads classes from the Java runtime directories and your application's directory, while external class loaders load them from the paths they are given; check those carefully. You may also want to read more of the ClassLoader documentation.
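To see concretely whether a class is visible to the current class loader, you can probe it with `Class.forName`. A small self-contained sketch (the object and method names here are hypothetical, not part of any library):

```scala
// ClasspathCheck is a hypothetical helper for probing the classpath.
object ClasspathCheck {
  // Returns true if the fully-qualified class name can be resolved
  // by the current class loader, false on ClassNotFoundException.
  def isOnClasspath(fqcn: String): Boolean =
    try {
      Class.forName(fqcn)
      true
    } catch {
      case _: ClassNotFoundException => false
    }

  def main(args: Array[String]): Unit = {
    // Present in every JVM:
    println(isOnClasspath("java.lang.String"))
    // Only true when hadoop-aws is actually on the classpath:
    println(isOnClasspath("org.apache.hadoop.fs.s3a.S3AFileSystem"))
  }
}
```

Running this inside the failing application tells you immediately whether the problem is the dependency declaration or something else, such as a scope or packaging issue.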
Important: classpath setup
https://cwiki.apache.org/confluence/display/HADOOP2/AmazonS3