
Apache Flink integration with Elasticsearch

I am trying to integrate Flink with Elasticsearch 2.1.1, using the following Maven dependency:

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-elasticsearch2_2.10</artifactId>
  <version>1.1-SNAPSHOT</version>
</dependency>

Here is the Java code where I read events from a Kafka queue (which works fine), but somehow the events never get posted to Elasticsearch, and there is no error either. If I change any of the Elasticsearch settings below (port, hostname, cluster name, or index name), I immediately see an error, but with the current settings nothing is reported and no new documents are created in Elasticsearch.

public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();

    // parse user parameters
    ParameterTool parameterTool = ParameterTool.fromArgs(args);

    DataStream<String> messageStream = env.addSource(new FlinkKafkaConsumer082<>(
            parameterTool.getRequired("topic"), new SimpleStringSchema(), parameterTool.getProperties()));

    messageStream.print();

    Map<String, String> config = new HashMap<>();
    // flush after every element so writes (and failures) surface immediately
    config.put(ElasticsearchSink.CONFIG_KEY_BULK_FLUSH_MAX_ACTIONS, "1");
    config.put(ElasticsearchSink.CONFIG_KEY_BULK_FLUSH_INTERVAL_MS, "1");

    config.put("cluster.name", "FlinkDemo");

    // the ES TransportClient talks on port 9300, not the HTTP port 9200
    List<InetSocketAddress> transports = new ArrayList<>();
    transports.add(new InetSocketAddress(InetAddress.getByName("localhost"), 9300));

    messageStream.addSink(new ElasticsearchSink<String>(config, transports, new TestElasticsearchSinkFunction()));

    env.execute();
}
private static class TestElasticsearchSinkFunction implements ElasticsearchSinkFunction<String> {
    private static final long serialVersionUID = 1L;

    public IndexRequest createIndexRequest(String element) {
        Map<String, Object> json = new HashMap<>();
        json.put("data", element);

        return Requests.indexRequest()
                .index("flink").id("hash"+element).source(json);
    }

    @Override
    public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
        indexer.add(createIndexRequest(element));
    }
}

I was indeed running it on the local machine and debugging it as well, but the one thing I was missing was proper logging configuration, as most Elasticsearch issues are described in "log.warn" statements. The problem was an exception inside "BulkRequestHandler.java" in the elasticsearch-2.2.1 client API, which threw the error "org.elasticsearch.action.ActionRequestValidationException: Validation Failed: 1: type is missing;". I had created the index but not a type, which I find pretty strange, as the client should primarily be concerned with the index and create the type by default.
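For reference, here is a minimal sketch of the fix: adding an explicit .type(...) to the IndexRequest in the question's sink function resolves the validation error. The type name "flink-log" is an illustrative choice, not from the original code.

    public IndexRequest createIndexRequest(String element) {
        Map<String, Object> json = new HashMap<>();
        json.put("data", element);

        // Specify the mapping type explicitly; omitting it triggers
        // "Validation Failed: 1: type is missing;" with the ES 2.x client.
        // The type name "flink-log" is illustrative.
        return Requests.indexRequest()
                .index("flink")
                .type("flink-log")
                .id("hash" + element)
                .source(json);
    }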

I have found a very good example of the Flink & Elasticsearch connector.

First, the Maven dependency:

<dependency>
  <groupId>org.apache.flink</groupId>
  <artifactId>flink-connector-elasticsearch2_2.10</artifactId>
  <version>1.1-SNAPSHOT</version>
</dependency>

Second, the example Java code:

public static void writeElastic(DataStream<String> input) {

    Map<String, String> config = new HashMap<>();

    // This instructs the sink to emit after every element, otherwise they would be buffered
    config.put("bulk.flush.max.actions", "1");
    config.put("cluster.name", "es_keira");

    try {
        // Add elasticsearch hosts on startup
        List<InetSocketAddress> transports = new ArrayList<>();
        transports.add(new InetSocketAddress("127.0.0.1", 9300)); // port is 9300 not 9200 for ES TransportClient

        ElasticsearchSinkFunction<String> indexLog = new ElasticsearchSinkFunction<String>() {
            public IndexRequest createIndexRequest(String element) {
                String[] logContent = element.trim().split("\t");
                Map<String, String> esJson = new HashMap<>();
                esJson.put("IP", logContent[0]);
                esJson.put("info", logContent[1]);

                return Requests
                        .indexRequest()
                        .index("viper-test")
                        .type("viper-log")
                        .source(esJson);
            }

            @Override
            public void process(String element, RuntimeContext ctx, RequestIndexer indexer) {
                indexer.add(createIndexRequest(element));
            }
        };

        ElasticsearchSink<String> esSink = new ElasticsearchSink<>(config, transports, indexLog);
        input.addSink(esSink);
    } catch (Exception e) {
        System.out.println(e);
    }
}
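For completeness, a minimal sketch of how writeElastic might be wired into a job, reusing the Kafka source from the question above (the topic parameter, FlinkKafkaConsumer082, and the job name are assumptions carried over for illustration):

public static void main(String[] args) throws Exception {
    StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
    ParameterTool parameterTool = ParameterTool.fromArgs(args);

    // Kafka source as in the question; the consumer class matches the version used there
    DataStream<String> messageStream = env.addSource(new FlinkKafkaConsumer082<>(
            parameterTool.getRequired("topic"), new SimpleStringSchema(), parameterTool.getProperties()));

    // Attach the Elasticsearch sink defined above
    writeElastic(messageStream);

    env.execute("kafka-to-elasticsearch");
}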
