
Dbeaver Exception: Data Source was invalid

I am trying to work with DBeaver, processing data via Spark Hive. The connection itself is stable, since the following command works:

select * from database.table limit 100

However, as soon as I deviate from this simple fetch query, I get an exception. E.g. running the query

select count(*) from database.table limit 100

results in the exception:

SQL Error [2] [08S01]: org.apache.hive.service.cli.HiveSQLException: Error while processing statement: FAILED: Execution Error, return code 2 from org.apache.hadoop.hive.ql.exec.tez.TezTask. Vertex failed, vertexName=Map 1, vertexId=vertex_1526294345914_23590_12_00, diagnostics=[Vertex vertex_1526294345914_23590_12_00 [Map 1] killed/failed due to:ROOT_INPUT_INIT_FAILURE, Vertex Input: postings initializer failed, vertex=vertex_1526294345914_23590_12_00 [Map 1], com.amazon.ws.emr.hadoop.fs.shaded.com.amazonaws.services.s3.model.AmazonS3Exception: Bad Request (Service: Amazon S3; Status Code: 400; Error Code: 400 Bad Request; Request ID: 95BFFF20D13AECDA), S3 Extended Request ID: fSbzZDf/Xi0b+CL99c5DKi8GYrJ7TQXj5/WWGCiCpGa6JU5SGeoxA4lunoxPCNBJ2MPA3Hxh14M=

Can someone help me here?

400/Bad Request is the generic S3/AWS "didn't like your payload/request/auth" response. There are some details in the ASF S3A docs, but those are for the ASF connector, not the Amazon one (which yours is, judging from the stack trace). A bad endpoint for v4-authenticated buckets is usually problem #1; after that... who knows?
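For the ASF S3A connector, the fix for the endpoint problem is to point `fs.s3a.endpoint` at the bucket's regional endpoint. A minimal sketch of the relevant Hadoop configuration fragment, assuming a v4-only region such as eu-central-1 (the region here is an illustrative placeholder, not taken from the question; the EMR connector in the stack trace is configured differently):

```xml
<!-- core-site.xml sketch: v4-signed regions reject the default
     endpoint, so name the bucket's regional endpoint explicitly. -->
<property>
  <name>fs.s3a.endpoint</name>
  <value>s3.eu-central-1.amazonaws.com</value>
</property>
```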

  1. Try some basic hadoop fs -ls s3://bucket/path operations first.
  2. You can try running the cloudstore diagnostics against it; that's my first call when debugging a client. It isn't explicitly aware of the EMR S3 connector though, so it won't look at the credentials in any detail.
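As a sketch, the two checks above might look like the following on a cluster node (the bucket name, path, and cloudstore jar filename are placeholders; `storediag` is the diagnostics entry point the cloudstore project documents):

```shell
# 1. Basic filesystem check: can this cluster's S3 connector
#    authenticate and list the path at all?
hadoop fs -ls s3://my-bucket/my/path

# 2. Fuller report on endpoint, auth chain, and connector settings
#    via the cloudstore storediag tool (jar downloaded separately).
hadoop jar cloudstore.jar storediag s3a://my-bucket/my/path
```

If step 1 already fails with a 400, the problem is in the cluster's S3 configuration or credentials, not in Hive or Tez.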
