
S3 bucket missing redshift-spark

I'm trying to read data from Redshift using spark-redshift and I'm running into the error below. I have already created the bucket in S3 and am able to access it with sufficient credentials.

java.sql.SQLException: Amazon Invalid operation: S3ServiceException:The specified bucket does not exist,Status 404,Error NoSuchBucket,Rid AA6E01BF9BCED7ED,ExtRid 7TQKPoWU5lMdJ9av3E0Ehzdgg+e0yRrNYaB5Q+WCef0JPm134XHeiSNk1mx4cdzp,CanRetry 1
Details:

error: S3ServiceException:The specified bucket does not exist,Status 404,Error NoSuchBucket,Rid AA6E01BF9BCED7ED,ExtRid 7TQKPoWU5lMdJ9av3E0Ehzdgg+e0yRrNYaB5Q+WCef0JPm134XHeiSNk1mx4cdzp,CanRetry 1
code: 8001
context: Listing bucket=redshift-spark.s3.amazonaws.com prefix=s3Redshift/3a312209-7d6d-4d6b-bbd4-c1a70b2e136b/
query: 0
location: s3_unloader.cpp:200
process: padbmaster [pid=4952]
-----------------------------------------------;
at com.amazon.redshift.client.messages.inbound.ErrorResponse.toErrorException(ErrorResponse.java:1830)
at com.amazon.redshift.client.PGMessagingContext.handleErrorResponse(PGMessagingContext.java:804)
at com.amazon.redshift.client.PGMessagingContext.handleMessage(PGMessagingContext.java:642)
at com.amazon.jdbc.communications.InboundMessagesPipeline.getNextMessageOfClass(InboundMessagesPipeline.java:312)
at com.amazon.redshift.client.PGMessagingContext.doMoveToNextClass(PGMessagingContext.java:1062)
at com.amazon.redshift.client.PGMessagingContext.getErrorResponse(PGMessagingContext.java:1030)
at com.amazon.redshift.client.PGClient.handleErrorsScenario2ForPrepareExecution(PGClient.java:2417)
at com.amazon.redshift.client.PGClient.handleErrorsPrepareExecute(PGClient.java:2358)
at com.amazon.redshift.client.PGClient.executePreparedStatement(PGClient.java:1358)
at com.amazon.redshift.dataengine.PGQueryExecutor.executePreparedStatement(PGQueryExecutor.java:370)
at com.amazon.redshift.dataengine.PGQueryExecutor.execute(PGQueryExecutor.java:245)
at com.amazon.jdbc.common.SPreparedStatement.executeWithParams(Unknown Source)
at com.amazon.jdbc.common.SPreparedStatement.execute(Unknown Source)
at com.databricks.spark.redshift.JDBCWrapper$$anonfun$executeInterruptibly$1.apply(RedshiftJDBCWrapper.scala:101)
at com.databricks.spark.redshift.JDBCWrapper$$anonfun$executeInterruptibly$1.apply(RedshiftJDBCWrapper.scala:101)
at com.databricks.spark.redshift.JDBCWrapper$$anonfun$2.apply(RedshiftJDBCWrapper.scala:119)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.liftedTree1$1(Future.scala:24)
at scala.concurrent.impl.Future$PromiseCompletingRunnable.run(Future.scala:24)
at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1142)
at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:617)

I do have the bucket created in S3.
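
For reference, the read that triggers this looks roughly like the following (a minimal sketch; the JDBC URL, table name, and credentials are placeholders, while the bucket and prefix match the ones in the error above):

    import org.apache.spark.sql.SQLContext

    val sqlContext = new SQLContext(sc) // sc is an existing SparkContext

    // spark-redshift first UNLOADs the query result to the S3 tempdir,
    // then reads those files back; the UNLOAD step is what fails here.
    val df = sqlContext.read
      .format("com.databricks.spark.redshift")
      .option("url", "jdbc:redshift://host:5439/db?user=user&password=pass")
      .option("dbtable", "my_table")
      .option("tempdir", "s3n://redshift-spark/s3Redshift/")
      .load()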

The problem was that the spark-redshift version conflicted with the aws-java-sdk version. Updating the pom resolved the issue.

Updated pom.xml:

        <dependency>
            <groupId>com.amazonaws</groupId>
            <artifactId>aws-java-sdk</artifactId>
            <version>1.10.22</version>
            <!--<version>1.7.4</version>-->
        </dependency>
        <dependency>
            <groupId>com.databricks</groupId>
            <artifactId>spark-redshift_2.10</artifactId>
            <version>0.6.0</version>
        </dependency>
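
If you are unsure which aws-java-sdk version actually ends up on the classpath, `mvn dependency:tree -Dincludes=com.amazonaws` prints the resolved version and which dependency pulls it in.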

Just try giving the bucket name as 's3://<bucket>/<folder name>'. For example, if your bucket has a directory structure like s3redshift/myfile, where s3redshift is the bucket name, the address should take the form 's3://s3redshift/myfile'. Avoid passing any other parameters.
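
In terms of the read sketch in the question, that means the tempdir option carries only the bucket name and key prefix (hypothetical values):

    // bucket name plus prefix only; no region, endpoint, or other parameters
    .option("tempdir", "s3://s3redshift/myfile")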

Have you seen this? https://github.com/databricks/spark-redshift/issues/176

This is most likely because the bucket and the cluster are in different regions.
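
You can check this with the AWS CLI: `aws s3api get-bucket-location --bucket <your-bucket>` returns the bucket's region (a null LocationConstraint means us-east-1), which should match the region the Redshift cluster runs in.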
