
Hive query CLI works, same via Hue fails

I have a weird issue with Hue (version 3.10).

I have a very simple Hive query:

drop table if exists csv_dump;
create table csv_dump row format delimited fields terminated by ',' lines terminated by '\n' location '/user/oozie/export' as select * from sample;
  • running this query in the Hive editor works
  • running this query as an oozie workflow from the command line works
  • running this query from the command line with beeline works
  • running this query via an oozie workflow from Hue fails (a sketch of such a workflow action follows this list)
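
For context, the failing case is the script wrapped in an Oozie hive2 action, as Hue's workflow editor builds it. The sketch below is only an illustrative reconstruction under that assumption: the script name s.q matches the launcher log further down, but the workflow/action names and the ${jobTracker}/${nameNode} parameters are placeholders, not taken from the actual cluster.

<!-- illustrative hive2 action only; names and parameters are assumed -->
<workflow-app name="csv-dump-wf" xmlns="uri:oozie:workflow:0.5">
    <start to="export"/>
    <action name="export">
        <hive2 xmlns="uri:oozie:hive2-action:0.1">
            <job-tracker>${jobTracker}</job-tracker>
            <name-node>${nameNode}</name-node>
            <jdbc-url>jdbc:hive2://ip-10-0-0-139.eu-west-1.compute.internal:10000/default</jdbc-url>
            <script>s.q</script>
        </hive2>
        <ok to="end"/>
        <error to="kill"/>
    </action>
    <kill name="kill">
        <message>hive2 action failed</message>
    </kill>
    <end name="end"/>
</workflow-app>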

Fail in that case means:

  • drop and create are not run, or at least do not have any effect
  • a prepare action in the workflow will be executed
  • the hive2 step in the workflow still says succeeded
  • a following step will be executed.

Now I did try with different users (oozie and ambari, adapting the location as relevant), with exactly the same success/failure cases.

I cannot find any relevant logs, except maybe from Hue:

------------------------

Beeline command arguments :
             -u
             jdbc:hive2://ip-10-0-0-139.eu-west-1.compute.internal:10000/default
             -n
             oozie
             -p
             DUMMY
             -d
             org.apache.hive.jdbc.HiveDriver
             -f
             s.q
             -a
             delegationToken
             --hiveconf
             mapreduce.job.tags=oozie-e686d7aaef4a29c020059e150d36db98

Fetching child yarn jobs
tag id : oozie-e686d7aaef4a29c020059e150d36db98
Child yarn jobs are found - 
=================================================================

>>> Invoking Beeline command line now >>>

0: jdbc:hive2://ip-10-0-0-139.eu-west-1.compu> drop table if exists csv_dump; create table csv_dump0 row format delimited fields terminated by ',' lines terminated by '\n' location '/user/ambari/export' as select * from sample;

<<< Invocation of Beeline command completed <<<

 Hadoop Job IDs executed by Beeline: 


<<< Invocation of Main class completed <<<


Oozie Launcher, capturing output data:
=======================
#
#Thu Jul 07 13:12:39 UTC 2016
hadoopJobs=


=======================

Oozie Launcher, uploading action data to HDFS sequence file: hdfs://ip-10-0-0-139.eu-west-1.compute.internal:8020/user/oozie/oozie-oozi/0000011-160707062514560-oozie-oozi-W/hive2-f2c9--hive2/action-data.seq

Oozie Launcher ends

Here I see that Beeline is started, but I do not see any mappers allocated, as I do on the command line.
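
For comparison, the working command-line case is essentially the same Beeline invocation run by hand; the call below mirrors the launcher arguments above (minus the delegation token) and assumes the script file s.q is in the current directory, with placeholder credentials.

# manual run of the same script; arguments mirror the Oozie launcher log above
beeline -u jdbc:hive2://ip-10-0-0-139.eu-west-1.compute.internal:10000/default \
        -n oozie -p DUMMY \
        -d org.apache.hive.jdbc.HiveDriver \
        -f s.q

Run this way, the CTAS does allocate mappers, which is exactly what is missing from the launcher log.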

Would anybody have any idea of what could go wrong?

Thanks, Guillaume

As explained by @romain in the comments, newlines need to be added in the SQL script (the launcher log above shows both statements running together on a single line). Then all is good.
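
For illustration, a fixed s.q along those lines, assuming the original two statements are kept as-is; the only change is that each statement sits on its own line and the file ends with a newline:

-- s.q: one statement per line, each terminated by ';' and followed by a newline
drop table if exists csv_dump;
create table csv_dump row format delimited fields terminated by ',' lines terminated by '\n' location '/user/oozie/export' as select * from sample;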
