
Python program to use Elasticsearch as sink in Apache Flink

I am trying to read data from a Kafka topic, do some processing, and dump the data into Elasticsearch, but I could not find a Python example that uses Elasticsearch as the sink. Could anyone help me with a snippet?

# add kafka connector dependency
kafka_jar = os.path.join(os.path.abspath(os.path.dirname(__file__)),
                         'flink-sql-connector-kafka_2.11-1.14.0.jar')

tbl_env.get_config()\
    .get_configuration()\
    .set_string("pipeline.jars", "file://{}".format(kafka_jar))

Below is the error:

Caused by: org.apache.flink.table.api.ValidationException: Could not find any factory for identifier 'kafka' that implements 'org.apache.flink.table.factories.DynamicTableFactory' in the classpath.

Available factory identifiers are:

blackhole
datagen
filesystem
print
        at org.apache.flink.table.factories.FactoryUtil.discoverFactory(FactoryUtil.java:399)
        at org.apache.flink.table.factories.FactoryUtil.enrichNoMatchingConnectorError(FactoryUtil.java:583)
        ... 31 more
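
The "Could not find any factory for identifier 'kafka'" error means the connector jar never actually made it onto the classpath: pipeline.jars is accepted silently even when the path it points at does not exist. A minimal sketch for ruling that out; the Elasticsearch jar name is an assumption for illustration, and both jars are assumed to sit next to the script:

import os
from pyflink.table import EnvironmentSettings, TableEnvironment

# fail fast if a jar path is wrong; a bad relative path produces exactly
# the "Could not find any factory" error above
base_dir = os.path.abspath(os.path.dirname(__file__))
kafka_jar = os.path.join(base_dir, 'flink-sql-connector-kafka_2.11-1.14.0.jar')
es_jar = os.path.join(base_dir, 'flink-sql-connector-elasticsearch7_2.11-1.14.0.jar')
for jar in (kafka_jar, es_jar):
    assert os.path.exists(jar), "connector jar not found: " + jar

tbl_env = TableEnvironment.create(
    EnvironmentSettings.new_instance().in_streaming_mode().build())

# pipeline.jars takes a semicolon-separated list of URLs, so the Kafka and
# Elasticsearch connector jars can be registered together
tbl_env.get_config()\
    .get_configuration()\
    .set_string("pipeline.jars",
                ";".join("file://" + p for p in (kafka_jar, es_jar)))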

Reference: https://nightlies.apache.org/flink/flink-docs-release-1.14/api/python/pyflink.datastream.html#pyflink.datastream.connectors.JdbcSink

Kafka to MySQL:

import os
from pyflink.datastream import StreamExecutionEnvironment
from pyflink.table import StreamTableEnvironment, EnvironmentSettings
from pyflink.table.expressions import col


env = StreamExecutionEnvironment.get_execution_environment()
# the blink planner is the default in recent Flink versions, so
# use_blink_planner() is no longer needed
t_env = StreamTableEnvironment.create(
    env,
    environment_settings=EnvironmentSettings.new_instance().in_streaming_mode().build())
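
# register the Kafka connector jar (as in the question's snippet); without
# it the 'kafka' factory cannot be found on the classpath
kafka_jar = os.path.join(os.path.abspath(os.path.dirname(__file__)),
                         'flink-sql-connector-kafka_2.11-1.14.0.jar')
t_env.get_config()\
    .get_configuration()\
    .set_string("pipeline.jars", "file://{}".format(kafka_jar))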

sourceKafkaDdl = """
    CREATE TABLE sourceKafka (
        ID VARCHAR,
        TRUCK_ID VARCHAR,
        SPEED VARCHAR,
        GPS_TIME VARCHAR
    ) COMMENT 'get from kafka'
    WITH (
        'connector' = 'kafka',
        'topic' = 'pyflink_test',
        'properties.bootstrap.servers' = '***:9092',
        'scan.startup.mode' = 'earliest-offset',
        'format' = 'json'
    )
    """

# the legacy 'connector.type' option style belongs to the old connector
# stack; recent Flink versions use the flat option keys below
mysqlSinkDdl = """
    CREATE TABLE mysqlSink (
        id VARCHAR,
        truck_id VARCHAR
    )
    WITH (
        'connector' = 'jdbc',
        'url' = 'jdbc:mysql://***:***/test?autoReconnect=true&failOverReadOnly=false&useUnicode=true&characterEncoding=utf-8&useSSL=false',
        'username' = '**',
        'password' = '**',
        'table-name' = 'mysqlsink',
        'driver' = 'com.mysql.cj.jdbc.Driver',
        'sink.buffer-flush.interval' = '5s',
        'sink.buffer-flush.max-rows' = '1'
    )
"""

t_env.execute_sql(sourceKafkaDdl)
t_env.execute_sql(mysqlSinkDdl)

# insert_into()/execute() are deprecated and removed in recent Flink
# versions; execute_insert() submits the job and wait() blocks on it
t_env.from_path('sourceKafka')\
    .select(col('ID'), col('TRUCK_ID'))\
    .execute_insert('mysqlSink')\
    .wait()
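
For the Elasticsearch sink the question actually asks about, the same pattern applies: register the connector jar, declare the sink table with DDL, and insert into it. A minimal sketch, assuming Elasticsearch 7, the flink-sql-connector-elasticsearch7 jar added to pipeline.jars next to the Kafka jar (semicolon-separated), and placeholder host and index names:

# hypothetical Elasticsearch 7 sink table; 'hosts' and 'index' are placeholders
esSinkDdl = """
    CREATE TABLE esSink (
        id VARCHAR,
        truck_id VARCHAR,
        PRIMARY KEY (id) NOT ENFORCED
    )
    WITH (
        'connector' = 'elasticsearch-7',
        'hosts' = 'http://localhost:9200',
        'index' = 'truck_events'
    )
"""

t_env.execute_sql(esSinkDdl)

# declaring a primary key puts the connector in upsert mode; omit it for
# plain append-only writes
t_env.from_path('sourceKafka')\
    .select(col('ID'), col('TRUCK_ID'))\
    .execute_insert('esSink')\
    .wait()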
