
DBeaver Exception: Data Source was invalid

I am trying to work with DBeaver, processing data via Spark Hive. The connection is stable, as the following command works:

select * from database.table limit 100

However, as soon as I deviate from this simple fetch query, I get an exception. For example, running the query

select count(*) from database.table limit 100

results in the exception:

SQL Error [2] [08S01]: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1526294345914_23590_12_00, diagnostics=[Vertex vertex_1526294345914_23590_12_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: postings initializer failed, vertex=vertex_1526294345914_23590_12_00 [Map 1], com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 95BFFF20D13AECDA), S3 Extended Request ID: fSbzZDf/Xi0b+CL99c5DKi8GYrJ7TQXj5/WWGCiCpGa6JU5SGeoxA4lunoxPCNBJ2MPA3Hxh14M=

Can someone help me here?

400/Bad Request is the generic S3/AWS "didn't like your payload/request/auth" response. There are some details in the ASF S3A docs, but those cover the ASF connector, not the Amazon one (which yours is, judging from the stack trace). A bad endpoint for v4-authenticated buckets is usually problem #1; after that... who knows?
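For comparison, if a cluster uses the open-source S3A connector (not the EMR connector in this stack trace), a v4-only region needs its region-specific endpoint set explicitly, or S3 rejects requests with exactly this 400. A minimal sketch; the property name is from the Hadoop S3A documentation, while the region value is only an example:

```xml
<!-- core-site.xml: point S3A at the region-specific endpoint
     required for V4-signed buckets (eu-central-1 is an example) -->
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.eu-central-1.amazonaws.com</value>
</property>
```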

  1. Try some basic hadoop fs -ls s3://bucket/path operations first.
  2. You can try running the cloudstore diagnostics against it; that's my first call when debugging a client. It's not explicitly aware of the EMR S3 connector, though, so it won't look at the credentials in any detail.
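The two checks above can be run from a node that has the cluster's Hadoop client configured. A sketch under assumptions: the bucket/path are placeholders, and the cloudstore JAR name and version are hypothetical (fetch the actual release from the steveloughran/cloudstore project):

```shell
# 1. Basic filesystem sanity check: can the Hadoop client list the bucket at all?
#    A 400 here reproduces the problem outside of Hive/Tez.
hadoop fs -ls s3://mybucket/path/

# 2. Run the cloudstore storediag diagnostics against the same path;
#    it reports classpath, endpoint, and connector configuration details.
#    (JAR filename is an assumption; use the downloaded release's name.)
hadoop jar cloudstore-1.0.jar storediag s3://mybucket/path/
```

If step 1 already fails with a 400, the problem is in the client's S3 configuration or credentials, not in DBeaver or the Hive query itself.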

