How to configure the environment file to insert data from the Flink SQL Client into Hive?
My environment:
Component | Version
---|---
Hadoop | 3.1.2
Hive | 2.3.4
Flink | 1.12.0
Ubuntu | 20.04
I'm new to the Flink SQL Client.
My $FLINK_HOME/conf/flink-hive.yaml is here.
It works fine for many Flink SQL Client commands, but when I run the following:
Flink SQL> INSERT into code_city values(1,'a','b','2017-09-15');
I get:
[INFO] Submitting SQL update statement to the cluster...
[ERROR] Could not execute SQL statement. Reason:
org.apache.flink.table.api.ValidationException: Table options do not contain an option key 'connector' for discovering a connector
Problem:
How should I configure the flink-hive.yaml above if I want to insert data from the Flink SQL Client into a Hive table?
The official Example.yaml is here, but it's not complete.
Thanks for your help!
There's no flink-hive.yaml AFAIK; you should configure the catalog properties in sql-client-defaults.yaml.
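For example, a minimal catalogs section in sql-client-defaults.yaml might look like the sketch below. The catalog name myhive and the path /opt/hive/conf are placeholders; point hive-conf-dir at the directory that contains your hive-site.xml.

catalogs:
  - name: myhive           # placeholder name; you can pick any catalog name
    type: hive
    hive-conf-dir: /opt/hive/conf   # assumption: directory holding your hive-site.xml
    default-database: default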
Then you need to set your HADOOP_CLASSPATH environment variable so that Flink can load the Hadoop-related jars.
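The usual way is to derive it from the hadoop classpath command (this assumes the hadoop binary is on your PATH):

export HADOOP_CLASSPATH=`hadoop classpath`

Run this in the shell before starting the Flink cluster and the SQL client, or add it to your shell profile so it is always set.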
Finally, you need to add the necessary Hive connector dependency and Hive dependency to your Flink/lib, for example:
flink-sql-connector-hive-2.3.6_2.11-1.12.0.jar
hive-exec-2.3.2.jar
These jars can be downloaded from the Apache Flink website.
After these steps, you can enjoy Flink + Hive.
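For instance, assuming the catalog was registered as myhive as in the sketch above, your original INSERT should work once you switch to that catalog:

Flink SQL> USE CATALOG myhive;
Flink SQL> INSERT INTO code_city VALUES (1, 'a', 'b', '2017-09-15');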