
Logstash-Elasticsearch-Kibana Setup on Ubuntu

I am trying to set up the ELK stack on my Ubuntu sandbox and am stuck on an issue: Logstash is not sending data to Elasticsearch. I referred to the Elasticsearch documentation.

Kibana and Elasticsearch connectivity looks fine; I think what Kibana is reporting is that it can't find the data. I spent a couple of hours trying to figure it out, but no luck...

I'd appreciate any help fixing this issue. Thank you very much!

Here are my setup details:

Logstash Setup:

sirishg@sirishg-vm:/u02/app/logstash-2.1.1/bin$ ./logstash -f /u02/app/logstash-2.1.1/first-pipeline.conf 
Settings: Default filter workers: 1
Logstash startup completed

first-pipeline.conf:

    # The # character at the beginning of a line indicates a comment. Use comments to describe your configuration.
    input {
        file {
            path => "/u02/app/logstash-tutorial-dataset.log"
            start_position => beginning
        }
    }
    filter {
        grok {
            match => { "message" => "%{COMBINEDAPACHELOG}"}
        }
        geoip {
            source => "clientip"
        }
    }
    output {
        elasticsearch {
         hosts => ["localhost:9200"]
        }
        stdout {
         codec => rubydebug
        }
    }
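
A quick way to verify whether this pipeline has written anything into Elasticsearch is to query the indices directly (the elasticsearch output defaults to daily logstash-YYYY.MM.DD indices; the host is assumed to be localhost:9200 as in the config above):

    # List all indices; a logstash-YYYY.MM.DD entry should appear once events are flushed
    curl 'http://localhost:9200/_cat/indices?v'

    # Count documents matching the logstash-* pattern that Kibana looks for
    curl 'http://localhost:9200/logstash-*/_count?pretty'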

Elasticsearch Setup:

Health Check Report:

{"cluster_name":"my-application","status":"yellow","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":50.0}

Startup Logs:

sirishg@sirishg-vm:/u02/app/elasticsearch-2.1.1/bin$ ./elasticsearch
[2016-01-16 18:17:36,591][INFO ][node                     ] [node-1] version[2.1.1], pid[3596], build[40e2c53/2015-12-15T13:05:55Z]
[2016-01-16 18:17:36,594][INFO ][node                     ] [node-1] initializing ...
[2016-01-16 18:17:36,798][INFO ][plugins                  ] [node-1] loaded [], sites []
[2016-01-16 18:17:36,907][INFO ][env                      ] [node-1] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [12.6gb], net total_space [45.1gb], spins? [possibly], types [ext4]
[2016-01-16 18:17:43,349][INFO ][node                     ] [node-1] initialized
[2016-01-16 18:17:43,350][INFO ][node                     ] [node-1] starting ...
[2016-01-16 18:17:43,693][INFO ][transport                ] [node-1] publish_address {localhost/127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2016-01-16 18:17:43,713][INFO ][discovery                ] [node-1] my-application/8bfTdwZcSzaNC9_P2VYYvw
[2016-01-16 18:17:46,878][INFO ][cluster.service          ] [node-1] new_master {node-1}{8bfTdwZcSzaNC9_P2VYYvw}{127.0.0.1}{localhost/127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-01-16 18:17:46,980][INFO ][http                     ] [node-1] publish_address {localhost/127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2016-01-16 18:17:46,991][INFO ][node                     ] [node-1] started
[2016-01-16 18:17:47,318][INFO ][gateway                  ] [node-1] recovered [1] indices into cluster_state
[2016-01-16 18:20:03,866][INFO ][rest.suppressed          ] /logstash-*/_mapping/field/* Params: {ignore_unavailable=false, allow_no_indices=false, index=logstash-*, include_defaults=true, fields=*, _=1452986403826}
[logstash-*] IndexNotFoundException[no such index]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:636)
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:133)
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:77)
    at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsAction.doExecute(TransportGetFieldMappingsAction.java:57)
    at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsAction.doExecute(TransportGetFieldMappingsAction.java:40)
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
    at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)

Kibana Status:

sirishg@sirishg-vm:/u02/app/kibana-4.3.1-linux-x86/bin$ ./kibana 
  log   [18:18:36.697] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
  log   [18:18:36.786] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [18:18:36.852] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
  log   [18:18:36.875] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
  log   [18:18:36.883] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
  log   [18:18:36.907] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
  log   [18:18:36.936] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
  log   [18:18:36.950] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
  log   [18:18:37.078] [info][listening] Server running at http://0.0.0.0:5601
  log   [18:18:37.446] [info][status][plugin:elasticsearch] Status changed from yellow to green - Kibana index ready

Kibana UI Errors:

Error: Please specify a default index pattern
KbnError@http://localhost:5601/bundles/commons.bundle.js:58172:21
NoDefaultIndexPattern@http://localhost:5601/bundles/commons.bundle.js:58325:6
loadDefaultIndexPattern/<@http://localhost:5601/bundles/kibana.bundle.js:97911:1
processQueue@http://localhost:5601/bundles/commons.bundle.js:42358:29
scheduleProcessQueue/<@http://localhost:5601/bundles/commons.bundle.js:42374:28
$RootScopeProvider/this.$get</Scope.prototype.$eval@http://localhost:5601/bundles/commons.bundle.js:43602:17
$RootScopeProvider/this.$get</Scope.prototype.$digest@http://localhost:5601/bundles/commons.bundle.js:43413:16
$RootScopeProvider/this.$get</Scope.prototype.$apply@http://localhost:5601/bundles/commons.bundle.js:43710:14
$LocationProvider/this.$get</<@http://localhost:5601/bundles/commons.bundle.js:39839:14
jQuery.event.dispatch@http://localhost:5601/bundles/commons.bundle.js:22720:16
jQuery.event.add/elemData.handle@http://localhost:5601/bundles/commons.bundle.js:22407:7

Logstash debug logs:

  {:timestamp=>"2016-01-17T11:07:06.287000-0500", :message=>"Reading config file", :config_file=>"/u02/app/logstash-2.1.1/first-pipeline.conf", :level=>:debug, :file=>"logstash/agent.rb", :line=>"325", :method=>"local_config"}
{:timestamp=>"2016-01-17T11:07:06.420000-0500", :message=>"Compiled pipeline code:\n        @inputs = []\n        @filters = []\n        @outputs = []\n        @periodic_flushers = []\n        @shutdown_flushers = []\n\n          @input_file_1 = plugin(\"input\", \"file\", LogStash::Util.hash_merge_many({ \"path\" => (\"/u02/app/logstash-tutorial-dataset.log\") }, { \"start_position\" => (\"beginning\") }))\n\n          @inputs << @input_file_1\n\n          @filter_grok_2 = plugin(\"filter\", \"grok\", LogStash::Util.hash_merge_many({ \"match\" => {(\"message\") => (\"%{COMBINEDAPACHELOG}\")} }))\n\n          @filters << @filter_grok_2\n\n            @filter_grok_2_flush = lambda do |options, &block|\n              @logger.debug? && @logger.debug(\"Flushing\", :plugin => @filter_grok_2)\n\n              events = @filter_grok_2.flush(options)\n\n              return if events.nil? || events.empty?\n\n              @logger.debug? && @logger.debug(\"Flushing\", :plugin => @filter_grok_2, :events => events)\n\n                          events = @filter_geoip_3.multi_filter(events)\n  \n\n\n              events.each{|e| block.call(e)}\n            end\n\n            if @filter_grok_2.respond_to?(:flush)\n              @periodic_flushers << @filter_grok_2_flush if @filter_grok_2.periodic_flush\n              @shutdown_flushers << @filter_grok_2_flush\n            end\n\n          @filter_geoip_3 = plugin(\"filter\", \"geoip\", LogStash::Util.hash_merge_many({ \"source\" => (\"clientip\") }))\n\n          @filters << @filter_geoip_3\n\n            @filter_geoip_3_flush = lambda do |options, &block|\n              @logger.debug? && @logger.debug(\"Flushing\", :plugin => @filter_geoip_3)\n\n              events = @filter_geoip_3.flush(options)\n\n              return if events.nil? || events.empty?\n\n              @logger.debug? && @logger.debug(\"Flushing\", :plugin => @filter_geoip_3, :events => events)\n\n                \n\n              events.each{|e| block.call(e)}\n            end\n\n            if @filter_geoip_3.respond_to?(:flush)\n              @periodic_flushers << @filter_geoip_3_flush if @filter_geoip_3.periodic_flush\n              @shutdown_flushers << @filter_geoip_3_flush\n            end\n\n          @output_elasticsearch_4 = plugin(\"output\", \"elasticsearch\", LogStash::Util.hash_merge_many({ \"hosts\" => [(\"localhost:9200\")] }))\n\n          @outputs << @output_elasticsearch_4\n\n          @output_stdout_5 = plugin(\"output\", \"stdout\", LogStash::Util.hash_merge_many({ \"codec\" => (\"rubydebug\") }))\n\n          @outputs << @output_stdout_5\n\n  def filter_func(event)\n    events = [event]\n    @logger.debug? && @logger.debug(\"filter received\", :event => event.to_hash)\n              events = @filter_grok_2.multi_filter(events)\n              events = @filter_geoip_3.multi_filter(events)\n    \n    events\n  end\n  def output_func(event)\n    @logger.debug? && @logger.debug(\"output received\", :event => event.to_hash)\n    @output_elasticsearch_4.handle(event)\n    @output_stdout_5.handle(event)\n    \n  end", :level=>:debug, :file=>"logstash/pipeline.rb", :line=>"38", :method=>"initialize"}
{:timestamp=>"2016-01-17T11:07:06.426000-0500", :message=>"Plugin not defined in namespace, checking for plugin file", :type=>"input", :name=>"file", :path=>"logstash/inputs/file", :level=>:debug, :file=>"logstash/plugin.rb", :line=>"76", :method=>"lookup"}
{:timestamp=>"2016-01-17T11:07:06.451000-0500", :message=>"Plugin not defined in namespace, checking for plugin file", :type=>"codec", :name=>"plain", :path=>"logstash/codecs/plain", :level=>:debug, :file=>"logstash/plugin.rb", :line=>"76", :method=>"lookup"}
{:timestamp=>"2016-01-17T11:07:06.465000-0500", :message=>"config LogStash::Codecs::Plain/@charset = \"UTF-8\"", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.468000-0500", :message=>"config LogStash::Inputs::File/@path = [\"/u02/app/logstash-tutorial-dataset.log\"]", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.469000-0500", :message=>"config LogStash::Inputs::File/@start_position = \"beginning\"", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.472000-0500", :message=>"config LogStash::Inputs::File/@codec = <LogStash::Codecs::Plain charset=>\"UTF-8\">", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.480000-0500", :message=>"config LogStash::Inputs::File/@add_field = {}", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.481000-0500", :message=>"config LogStash::Inputs::File/@stat_interval = 1", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.492000-0500", :message=>"config LogStash::Inputs::File/@discover_interval = 15", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.493000-0500", :message=>"config LogStash::Inputs::File/@sincedb_write_interval = 15", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.496000-0500", :message=>"config LogStash::Inputs::File/@delimiter = \"\\n\"", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.498000-0500", :message=>"Plugin not defined in namespace, checking for plugin file", :type=>"filter", :name=>"grok", :path=>"logstash/filters/grok", :level=>:debug, :file=>"logstash/plugin.rb", :line=>"76", :method=>"lookup"}
{:timestamp=>"2016-01-17T11:07:06.515000-0500", :message=>"config LogStash::Filters::Grok/@match = {\"message\"=>\"%{COMBINEDAPACHELOG}\"}", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.524000-0500", :message=>"config LogStash::Filters::Grok/@add_tag = []", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.532000-0500", :message=>"config LogStash::Filters::Grok/@remove_tag = []", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.535000-0500", :message=>"config LogStash::Filters::Grok/@add_field = {}", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.536000-0500", :message=>"config LogStash::Filters::Grok/@remove_field = []", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}

Elasticsearch Recent Logs:

sirishg@sirishg-vm:/u02/app/elasticsearch-2.1.1/bin$ ./elasticsearch
[2016-01-17 11:00:23,467][INFO ][node                     ] [node-1] version[2.1.1], pid[3418], build[40e2c53/2015-12-15T13:05:55Z]
[2016-01-17 11:00:23,470][INFO ][node                     ] [node-1] initializing ...
[2016-01-17 11:00:23,698][INFO ][plugins                  ] [node-1] loaded [], sites []
[2016-01-17 11:00:23,853][INFO ][env                      ] [node-1] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [12.6gb], net total_space [45.1gb], spins? [possibly], types [ext4]
[2016-01-17 11:00:27,412][INFO ][node                     ] [node-1] initialized
[2016-01-17 11:00:27,412][INFO ][node                     ] [node-1] starting ...
[2016-01-17 11:00:27,605][INFO ][transport                ] [node-1] publish_address {localhost/127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2016-01-17 11:00:27,616][INFO ][discovery                ] [node-1] my-application/rd4S1ZOdQXOj3_g-N22NnQ
[2016-01-17 11:00:31,121][INFO ][cluster.service          ] [node-1] new_master {node-1}{rd4S1ZOdQXOj3_g-N22NnQ}{127.0.0.1}{localhost/127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-01-17 11:00:31,259][INFO ][http                     ] [node-1] publish_address {localhost/127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2016-01-17 11:00:31,260][INFO ][node                     ] [node-1] started
[2016-01-17 11:00:31,830][INFO ][gateway                  ] [node-1] recovered [2] indices into cluster_state

Have you been able to get this to work? Some comments:

1) The fact that you have Kibana running on "0.0.0.0" is sometimes a sign of something going wrong; check the configuration and connectivity with Elasticsearch (see the kibana.yml sketch after this list).

2) What index are you putting the information into? logstash-*?

3) If everything else fails, update to the current 2.3.* (Elasticsearch) and 4.4.* (Kibana).

4) In order to have Logstash actually pick the file up and read it (and therefore send the data to Elasticsearch), you should write to the file again (change its creation/modification timestamps). That part does not always come easy, because Logstash's file input keeps a pointer (the sincedb) to the last position it has already read in the file (see the file-input sketch after this list).
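
For point 1, the Elasticsearch connection Kibana uses is set in config/kibana.yml; a minimal sketch, assuming Kibana 4.3's dotted config keys and Elasticsearch on the same host:

    # config/kibana.yml
    elasticsearch.url: "http://localhost:9200"

For point 4, a minimal sketch of forcing the file input to re-read a file it has already processed: with sincedb_path pointed at /dev/null the input discards its saved read position, so together with start_position => "beginning" the whole file is ingested again on every run.

    input {
        file {
            path => "/u02/app/logstash-tutorial-dataset.log"
            start_position => "beginning"
            # Discard the saved read position so the file is re-read from the start
            sincedb_path => "/dev/null"
        }
    }

Alternatively, simply appending a new line to the log file (for example, echo 'new entry' >> logstash-tutorial-dataset.log) makes the input pick it up as fresh data without touching the sincedb.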

You probably got it working by now, so maybe I am shouting into the wind, but on the other hand maybe this can help someone.
