

Hive INSERT OVERWRITE showing error

I am working on an example of integrating hbase-0.98.19 with hive-1.2.1. I have created an HBase-backed table in Hive using the following command:

CREATE TABLE hbase_table_emp(id int, name string, role string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:name,cf1:role")
TBLPROPERTIES ("hbase.table.name" = "emp");

Then I created 'testemp' for importing data into 'hbase_table_emp'. The code below shows how the 'testemp' table was created and loaded:

create table testemp(id int, name string, role string) row format delimited fields terminated by '\t';
load data local inpath '/home/hduser/sample.txt' into table testemp;
select * from testemp;
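For reference, the load expects fields separated by tabs, matching the delimiter declared on testemp. The actual file contents were not shown in the question, but a hypothetical sample.txt could look like this (tab-separated):

1	gaurav	manager
2	karthik	developer
3	anita	analyst

Any tab-separated file with an int id and two string columns per line would load the same way.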

Up to this point, everything works fine. But when I run the command insert overwrite table hbase_table_emp select * from testemp;

I get the following error:

hive> insert overwrite table hbase_table_emp select * from testemp;
Query ID = hduser_20160613131557_ddef0b47-a773-477b-94d2-5cc070eb0de6
Total jobs = 1
Launching Job 1 out of 1
Number of reduce tasks is set to 0 since there's no reduce operator
java.io.IOException: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: Must specify table name
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.checkOutputSpecs(FileSinkOperator.java:1117)
    at org.apache.hadoop.hive.ql.io.HiveOutputFormatImpl.checkOutputSpecs(HiveOutputFormatImpl.java:67)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:564)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:432)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1296)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1293)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1293)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:562)
    at org.apache.hadoop.mapred.JobClient$1.run(JobClient.java:557)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:415)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1628)
    at org.apache.hadoop.mapred.JobClient.submitJobInternal(JobClient.java:557)
    at org.apache.hadoop.mapred.JobClient.submitJob(JobClient.java:548)
    at org.apache.hadoop.hive.ql.exec.mr.ExecDriver.execute(ExecDriver.java:431)
    at org.apache.hadoop.hive.ql.exec.mr.MapRedTask.execute(MapRedTask.java:137)
    at org.apache.hadoop.hive.ql.exec.Task.executeTask(Task.java:160)
    at org.apache.hadoop.hive.ql.exec.TaskRunner.runSequential(TaskRunner.java:88)
    at org.apache.hadoop.hive.ql.Driver.launchTask(Driver.java:1653)
    at org.apache.hadoop.hive.ql.Driver.execute(Driver.java:1412)
    at org.apache.hadoop.hive.ql.Driver.runInternal(Driver.java:1195)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1059)
    at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1049)
    at org.apache.hadoop.hive.cli.CliDriver.processLocalCmd(CliDriver.java:213)
    at org.apache.hadoop.hive.cli.CliDriver.processCmd(CliDriver.java:165)
    at org.apache.hadoop.hive.cli.CliDriver.processLine(CliDriver.java:376)
    at org.apache.hadoop.hive.cli.CliDriver.executeDriver(CliDriver.java:736)
    at org.apache.hadoop.hive.cli.CliDriver.run(CliDriver.java:681)
    at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:621)
    at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
    at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:57)
    at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
    at java.lang.reflect.Method.invoke(Method.java:606)
    at org.apache.hadoop.util.RunJar.run(RunJar.java:221)
    at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
Caused by: org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: Must specify table name
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createHiveOutputFormat(FileSinkOperator.java:1139)
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.checkOutputSpecs(FileSinkOperator.java:1114)
    ... 37 more
Caused by: java.lang.IllegalArgumentException: Must specify table name
    at org.apache.hadoop.hbase.mapreduce.TableOutputFormat.setConf(TableOutputFormat.java:193)
    at org.apache.hive.common.util.ReflectionUtil.setConf(ReflectionUtil.java:101)
    at org.apache.hive.common.util.ReflectionUtil.newInstance(ReflectionUtil.java:87)
    at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveOutputFormat(HiveFileFormatUtils.java:277)
    at org.apache.hadoop.hive.ql.io.HiveFileFormatUtils.getHiveOutputFormat(HiveFileFormatUtils.java:267)
    at org.apache.hadoop.hive.ql.exec.FileSinkOperator.createHiveOutputFormat(FileSinkOperator.java:1137)
    ... 38 more
Job Submission failed with exception 'java.io.IOException(org.apache.hadoop.hive.ql.metadata.HiveException: java.lang.IllegalArgumentException: Must specify table name)'
FAILED: Execution Error, return code 1 from org.apache.hadoop.hive.ql.exec.mr.MapRedTask

PS: I have hbase.jar, zookeeper.jar and guava.jar included in the CLASSPATH.
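For anyone reproducing this setup, the required jars can also be registered from the Hive CLI instead of the shell CLASSPATH. The paths and versions below are hypothetical examples, not the ones used here; adjust them to your installation:

ADD JAR /usr/local/hbase/lib/hbase-client-0.98.19-hadoop2.jar;
ADD JAR /usr/local/hbase/lib/hbase-common-0.98.19-hadoop2.jar;
ADD JAR /usr/local/hbase/lib/hbase-server-0.98.19-hadoop2.jar;
ADD JAR /usr/local/zookeeper/zookeeper-3.4.6.jar;
ADD JAR /usr/local/hbase/lib/guava-12.0.1.jar;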

Thanks in advance.

For Hive HBase integration, in order to insert data into the HBase table, you also need to specify hbase.mapred.output.outputtable in the TBLPROPERTIES.

Hive HBase Integration

The hbase.mapred.output.outputtable property is optional; it's needed if you plan to insert data into the table (the property is used by hbase.mapreduce.TableOutputFormat).

So for your table you would need to run the following:

ALTER TABLE table_name SET TBLPROPERTIES ("hbase.mapred.output.outputtable" = "emp");
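Alternatively, if recreating the table is an option, the output table can be declared up front. A minimal sketch of the DDL with both properties set, using the same column mapping as in the question:

CREATE TABLE hbase_table_emp(id int, name string, role string)
STORED BY 'org.apache.hadoop.hive.hbase.HBaseStorageHandler'
WITH SERDEPROPERTIES ("hbase.columns.mapping" = ":key,cf1:name,cf1:role")
TBLPROPERTIES ("hbase.table.name" = "emp",
               "hbase.mapred.output.outputtable" = "emp");

After either the ALTER TABLE or a recreation along these lines, insert overwrite table hbase_table_emp select * from testemp; should be able to resolve the output table name.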
