Hive DML transactions (Update/Delete) not working for subqueries
I know Hive/Hadoop is not meant for updates/deletes, but my requirement is to update table person20 based on the data in table person21. With the progress of Hive on ORC it now supports ACID, but it does not look mature yet.
$ hive --version
Below are the detailed steps I performed to test the update logic.
CREATE TABLE person20(
persid int,
lastname string,
firstname string)
CLUSTERED BY (
persid)
INTO 1 BUCKETS
ROW FORMAT SERDE
'org.apache.hadoop.hive.ql.io.orc.OrcSerde'
STORED AS INPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.orc.OrcOutputFormat'
LOCATION
'hdfs://hostname.com:8020/user/hive/warehouse/person20'
TBLPROPERTIES (
'COLUMN_STATS_ACCURATE'='true',
'numFiles'='3',
'numRows'='2',
'rawDataSize'='348',
'totalSize'='1730',
'transactional'='true',
'transient_lastDdlTime'='1489668385')
Insert statement:
INSERT INTO TABLE person20 VALUES (0,'PP','B'),(2,'X','Y');
Select statement:
set hive.cli.print.header=true;
select * from person20;
persid lastname firstname
2 X Y
0 PP B
I also have another table, person21, which is a copy of person20:
CREATE TABLE person21(
persid int,
lastname string,
firstname string)
ROW FORMAT SERDE
'org.apache.hadoop.hive.serde2.lazy.LazySimpleSerDe'
STORED AS INPUTFORMAT
'org.apache.hadoop.mapred.TextInputFormat'
OUTPUTFORMAT
'org.apache.hadoop.hive.ql.io.HiveIgnoreKeyTextOutputFormat'
LOCATION
'hdfs://hostname.com:8020/user/hive/warehouse/person21'
TBLPROPERTIES (
'COLUMN_STATS_ACCURATE'='true',
'numFiles'='1',
'numRows'='2',
'rawDataSize'='11',
'totalSize'='13',
'transient_lastDdlTime'='1489668344')
Insert statement:
INSERT INTO TABLE person21 VALUES (0,'SS','B'),(2,'X1','Y');
Select statement:
select * from person21;
persid lastname firstname
2 X1 Y
0 SS B
I want to implement MERGE logic:
Merge into person20 p20 USING person21 p21
ON (p20.persid=p21.persid)
WHEN MATCHED THEN
UPDATE set p20.lastname=p21.lastname
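For context, MERGE was only added to Hive in the 2.2 line (HIVE-10924) and requires an ACID (transactional, bucketed ORC) target table, so it is not available on Hive 1.1/CDH 5.6. On a 2.2+ cluster, a full statement covering both branches would look roughly like this (a sketch, not verified against this data):

```sql
-- Requires Hive 2.2+ and a transactional target table.
-- Note: Hive does not allow the target alias on the SET column.
MERGE INTO person20 AS p20
USING person21 AS p21
ON p20.persid = p21.persid
WHEN MATCHED THEN
  UPDATE SET lastname = p21.lastname
WHEN NOT MATCHED THEN
  INSERT VALUES (p21.persid, p21.lastname, p21.firstname);
```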
The other option is a correlated-subquery UPDATE:
hive -e "set hive.auto.convert.join.noconditionaltask.size = 10000000; set hive.support.concurrency = true; set hive.enforce.bucketing = true; set hive.exec.dynamic.partition.mode = nonstrict; set hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; set hive.compactor.initiator.on = true;
set hive.compactor.worker.threads = 1 ; UPDATE person20 SET lastname = (select lastname from person21 where person21.lastname=person20.lastname);"
Logging initialized using configuration in jar:file:/usr/lib/hive/lib/hive-common-1.1.0-cdh5.6.0.jar!/hive-log4j.properties
NoViableAltException(224@[400:1: precedenceEqualExpression : ... ])
	at org.antlr.runtime.DFA.noViableAlt(DFA.java:158)
	at org.antlr.runtime.DFA.predict(DFA.java:116)
	at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceEqualExpression(HiveParser_IdentifiersParser.java:8651)
	at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.precedenceAndExpression(HiveParser_IdentifiersParser.java:9792)
	at org.apache.hadoop.hive.ql.parse.HiveParser_IdentifiersParser.expression(HiveParser_IdentifiersParser.java:6567)
	at org.apache.hadoop.hive.ql.parse.HiveParser.columnAssignmentClause(HiveParser.java:44206)
	at org.apache.hadoop.hive.ql.parse.HiveParser.setColumnsClause(HiveParser.java:44271)
	at org.apache.hadoop.hive.ql.parse.HiveParser.updateStatement(HiveParser.java:44417)
	at org.apache.hadoop.hive.ql.parse.HiveParser.execStatement(HiveParser.java:1616)
	at org.apache.hadoop.hive.ql.parse.HiveParser.statement(HiveParser.java:1062)
	at org.apache.hadoop.hive.ql.parse.ParseDriver.parse(ParseDriver.java:201)
	at org.apache.hadoop.hive.ql.Driver.compile(Driver.java:404)
	at org.apache.hadoop.hive.ql.Driver.run(Driver.java:1055)
	at org.apache.hadoop.hive.cli.CliDriver.main(CliDriver.java:615)
	at org.apache.hadoop.util.RunJar.main(RunJar.java:136)
FAILED: ParseException line 1:33 cannot recognize input near 'select' 'lastname' 'from' in expression specification
I believe it does not support subqueries. The same statement works with a constant.
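Since the SET clause cannot contain a subquery on this version, one common workaround (a sketch, untested on CDH 5.6; the staging table name is hypothetical) is to express the correlated update as an outer join and rebuild a non-transactional copy of the table with INSERT OVERWRITE:

```sql
-- Hypothetical staging rewrite: compute the updated rows with a join,
-- then overwrite person20_staged, a non-transactional copy of person20.
-- (INSERT OVERWRITE is not allowed on transactional tables, hence the copy.)
INSERT OVERWRITE TABLE person20_staged
SELECT p20.persid,
       COALESCE(p21.lastname, p20.lastname) AS lastname,  -- take p21's value when matched
       p20.firstname
FROM person20 p20
LEFT OUTER JOIN person21 p21
  ON p20.persid = p21.persid;
```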
hive -e "set hive.auto.convert.join.noconditionaltask.size = 10000000; set hive.support.concurrency = true; set hive.enforce.bucketing = true; set hive.exec.dynamic.partition.mode = nonstrict; set hive.txn.manager = org.apache.hadoop.hive.ql.lockmgr.DbTxnManager; set hive.compactor.initiator.on = true;
set hive.compactor.worker.threads = 1 ; UPDATE person20 SET lastname = 'PP' WHERE persid = 0;"
-- This statement updated the record successfully.
Could you help me find the best strategy for performing DML/merge operations in Hive?
You could do it by brute force:

Re-create person20, but not as ACID: partition it on a dummy column and create a single partition for the value 'dummy', then reload person20 and person21.

Create a work table tmpperson20 with exactly the same structure as person20 and the same 'dummy' partition, then run:

INSERT INTO tmpperson20 PARTITION (dummy='dummy') SELECT p20.persid, p21.lastname, ... FROM person20 p20 JOIN person21 p21 ON p20.persid=p21.persid
INSERT INTO tmpperson20 PARTITION (dummy='dummy') SELECT * FROM person20 p20 WHERE NOT EXISTS (select p21.persid FROM person21 p21 WHERE p20.persid=p21.persid)
ALTER TABLE person20 DROP PARTITION (dummy='dummy')
ALTER TABLE person20 EXCHANGE PARTITION (dummy='dummy') WITH TABLE tmpperson20

Finally, drop tmpperson20.
But with an ACID table it may be trickier because of the storage layout.
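If person20 must stay transactional, another sketch is to emulate the merge with a DELETE followed by an INSERT. This is untested on Hive 1.1, which may still reject the IN subquery in the WHERE clause, and note that it replaces whole matched rows (firstname included) rather than updating only lastname:

```sql
-- Hypothetical ACID emulation of the merge, assuming the IN subquery
-- is accepted in DELETE ... WHERE on this Hive version.
DELETE FROM person20
WHERE persid IN (SELECT persid FROM person21);

INSERT INTO TABLE person20
SELECT persid, lastname, firstname FROM person21;
```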
The HPL/SQL utility ships with Hive 2.x and could probably be installed on top of Hive 1.x, but I never had the chance to try it. Its Oracle-like dialect feels quite alien in Hive...!
Alternatively, you could develop some custom Java code that loops over a JDBC ResultSet and fires a PreparedStatement for each row.