Adding Hive Partition using Oozie
I'm using HDP-2.4.2 and am trying to add partitions to an external Hive table using an Oozie coordinator job. I created a coordinator that daily triggers the following workflow:
<workflow-app name="addPartition" xmlns="uri:oozie:workflow:0.4">
<start to="hive"/>
<action name="hive">
<hive2 xmlns="uri:oozie:hive2-action:0.1">
<job-tracker>${jobTracker}</job-tracker>
<name-node>${nameNode}</name-node>
<jdbc-url>jdbc:hive2://${jdbcPath}</jdbc-url>
<password>yarn</password>
<script>${appPath}/addPartition.q</script>
<param>nameNode=${nameNode}</param>
<param>dt=${dt}</param>
<param>path=${path}</param>
</hive2>
<ok to="end" />
<error to="fail" />
</action>
<kill name="fail">
<message>
Workflow failed, error message[${wf:errorMessage(wf:lastErrorNode())}]
</message>
</kill>
<end name="end" />
</workflow-app>
The executed script contains:
CREATE EXTERNAL TABLE IF NOT EXISTS visits (sid BIGINT, os STRING, browser STRING, visit_time TIMESTAMP)
PARTITIONED BY (dt STRING)
STORED AS PARQUET;
ALTER TABLE visits ADD PARTITION(dt = '${dt}') LOCATION '${nameNode}/data/parquet/visitors/${path}';
If I run the job, the table is created but no partition is added. In the YARN log I find:
Beeline command arguments :
-u
jdbc:hive2://localhost:10000/default
-n
yarn
-p
yarn
-d
org.apache.hive.jdbc.HiveDriver
--hivevar
nameNode=hdfs://bigdata01.local:8020
--hivevar
dt=2016-01-05
--hivevar
path=2016/01/05
-f
addPartition.q
-a
delegationToken
--hiveconf
mapreduce.job.tags=oozie-1b3b2ee664df7ac9ee436379d784955a
Fetching child yarn jobs
tag id : oozie-1b3b2ee664df7ac9ee436379d784955a
Child yarn jobs are found -
=================================================================
>>> Invoking Beeline command line now >>>
[...]
0: jdbc:hive2://localhost:10000/default> ALTER TABLE visits ADD PARTITION(dt = '${dt}') LOCATION '${nameNode}/data/parquet/visitors/${path}';
It looks as if the parameters in the ALTER TABLE statement are not replaced. To check this, I tried to call beeline directly from the CLI:
beeline -u jdbc:hive2://localhost:10000/default -n yarn -p yarn -d org.apache.hive.jdbc.HiveDriver --hivevar nameNode=hdfs://bigdata01.local:8020 --hivevar dt="2016-01-03" --hivevar path="2016/01/03" -e "ALTER TABLE visits ADD PARTITION(dt='${dt}') LOCATION '${nameNode}/data/parquet/visitors/${path}';"
which results in an error:
Connecting to jdbc:hive2://localhost:10000/default
Connected to: Apache Hive (version 1.2.1000.2.4.2.0-258)
Driver: Hive JDBC (version 1.2.1000.2.4.2.0-258)
Transaction isolation: TRANSACTION_REPEATABLE_READ
Error: Error while processing statement: FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.DDLTask. partition spec is invalid; field dt does not exist or is empty (state=08S01,code=1)
If I run the ALTER statement without parameters:
beeline -u jdbc:hive2://localhost:10000/default -n yarn -p yarn -d org.apache.hive.jdbc.HiveDriver -e "ALTER TABLE visits ADD PARTITION(dt='2016-01-03') LOCATION 'hdfs://bigdata01.local:8020/data/parquet/visitors/2016/01/03';"
or open a beeline console with the hivevars defined and execute the ALTER statement there:
beeline -u jdbc:hive2://localhost:10000/default -n yarn -p yarn -d org.apache.hive.jdbc.HiveDriver --hivevar nameNode=hdfs://bigdata01.local:8020 --hivevar dt="2016-01-03" --hivevar path="2016/01/03"
0: jdbc:hive2://localhost:10000/default> ALTER TABLE visits ADD PARTITION(dt = '${dt}') LOCATION '${nameNode}/data/parquet/visitors/${path}';
the partition is created.
Where am I wrong?
Update:
The values for the parameters in the hive2 action are defined in the oozie.properties file and in coordinator.xml:
<property>
<name>nameNode</name>
<value>${nameNode}</value>
</property>
<property>
<name>dt</name>
<value>${coord:formatTime(coord:dateOffset(coord:nominalTime(), -1,'DAY'),'yyyy-MM-dd')}</value>
</property>
<property>
<name>path</name>
<value>${coord:formatTime(coord:dateOffset(coord:nominalTime(), -1,'DAY'),'yyyy/MM/dd')}</value>
</property>
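As a sanity check outside Oozie, both EL expressions mean "nominal time minus one day", formatted two ways. A rough shell equivalent (an illustration only, assuming GNU `date` and a nominal time of 2016-01-06) produces exactly the values that show up in the YARN log:

```shell
# Rough shell equivalent of the two Oozie EL expressions (not Oozie itself).
# Assumes GNU date and a hypothetical nominal time of 2016-01-06.
nominal="2016-01-06"
dt=$(date -d "$nominal -1 day" +%Y-%m-%d)    # coord:dateOffset(..., -1, 'DAY') as 'yyyy-MM-dd'
path=$(date -d "$nominal -1 day" +%Y/%m/%d)  # same day formatted as 'yyyy/MM/dd'
echo "dt=$dt"      # dt=2016-01-05
echo "path=$path"  # path=2016/01/05
```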
In the YARN log you find:
Parameters:
------------------------
nameNode=hdfs://bigdata01.local:8020
dt=2016-01-05
path=2016/01/05
before they are set as hivevars in the beeline call from the hive2 action.
Thanks for your help, but I give up. Instead of a hive2 action I will use an ssh action to execute beeline with a static ALTER statement:
<ssh xmlns="uri:oozie:ssh-action:0.1">
<host>${sshUser}@${sshHost}</host>
<command>"beeline"</command>
<args>-u</args>
<args>jdbc:hive2://localhost:10000/default</args>
<args>-n</args>
<args>yarn</args>
<args>-p</args>
<args>yarn</args>
<args>-d</args>
<args>org.apache.hive.jdbc.HiveDriver</args>
<args>-e</args>
<args>"ALTER TABLE visits ADD PARTITION(dt='${dt}') LOCATION '${nameNode}/data/raw/parquet/visitors/${path}';"</args>
<capture-output />
</ssh>
Finally found the problem. You have to single-quote the statement on the shell command line, so that Bash does not expand `${foo}` to an empty string before beeline ever sees it, and use double quotes inside the SQL instead ;-)
$ beeline -u jdbc:hive2://localhost:10000/default -n yarn -p yarn -d org.apache.hive.jdbc.HiveDriver --hivevar foo=bar -e "SELECT '${foo}' as foo;"
+------+--+
| foo |
+------+--+
| |
+------+--+
beeline -u jdbc:hive2://localhost:10000/default -n yarn -p yarn -d org.apache.hive.jdbc.HiveDriver --hivevar foo=bar -e 'SELECT "${foo}" as foo;'
+------+--+
| foo |
+------+--+
| bar |
+------+--+
beeline -u jdbc:hive2://localhost:10000/default -n yarn -p yarn -d org.apache.hive.jdbc.HiveDriver --hivevar foo=bar -f selectFoo.q
+------+--+
| foo |
+------+--+
| bar |
+------+--+
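The difference can be reproduced without Hive at all: hivevars are beeline variables, not shell variables, so when `foo` is unset in the shell, Bash expands `${foo}` to an empty string inside double quotes, while single quotes pass the placeholder through to beeline literally. A minimal sketch of what the shell actually hands to beeline in each case:

```shell
# Make sure the shell has no variable of the same name as the hivevar:
unset foo
double="SELECT '${foo}' as foo;"   # Bash expands ${foo} -> empty string
single='SELECT "${foo}" as foo;'   # single quotes keep ${foo} literal
echo "$double"   # SELECT '' as foo;
echo "$single"   # SELECT "${foo}" as foo;
```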