
Logstash-Elasticsearch-Kibana Setup on Ubuntu

I am trying to set up the ELK stack on an Ubuntu sandbox and have run into a problem: Logstash is not sending data to Elasticsearch. I followed the Elasticsearch documentation.

The connection between Kibana and Elasticsearch appears to be working; the issue, I believe, is that Kibana reports it cannot find any data. I have spent a few hours trying to figure this out, with no luck.

Any help resolving this is appreciated. Thank you very much!

Here are my setup details.

Logstash setup:

sirishg@sirishg-vm:/u02/app/logstash-2.1.1/bin$ ./logstash -f /u02/app/logstash-2.1.1/first-pipeline.conf 
Settings: Default filter workers: 1
Logstash startup completed
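
If no events ever reach Elasticsearch, it helps to run Logstash with debug logging so you can see whether the file input emits anything at all. A minimal sketch, assuming the paths from this setup (the --debug flag is available in Logstash 2.x, and is how a debug log like the one further below can be produced):

    ./logstash -f /u02/app/logstash-2.1.1/first-pipeline.conf --debug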

first-pipeline.conf:

    # The # character at the beginning of a line indicates a comment. Use comments to describe your configuration.
    input {
        file {
            path => "/u02/app/logstash-tutorial-dataset.log"
            start_position => beginning
        }
    }
    filter {
        grok {
            match => { "message" => "%{COMBINEDAPACHELOG}" }
        }
        geoip {
            source => "clientip"
        }
    }
    output {
        elasticsearch {
            hosts => ["localhost:9200"]
        }
        stdout {
            codec => rubydebug
        }
    }
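
One common cause of this exact symptom: start_position => beginning only applies to files Logstash has not seen before. The file input records how far it has read in a sincedb file, so re-running the pipeline against a file that was already read once produces no new events. A minimal sketch for testing, assuming this setup's paths, that discards the read-position state by pointing sincedb at /dev/null (sincedb_path is a standard file-input option):

    input {
        file {
            path => "/u02/app/logstash-tutorial-dataset.log"
            start_position => beginning
            # Testing only: store read positions in /dev/null so the
            # file is re-read from the top on every Logstash run.
            sincedb_path => "/dev/null"
        }
    }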

Elasticsearch setup:

Health check report:

{"cluster_name":"my-application","status":"yellow","timed_out":false,"number_of_nodes":1,"number_of_data_nodes":1,"active_primary_shards":1,"active_shards":1,"relocating_shards":0,"initializing_shards":0,"unassigned_shards":1,"delayed_unassigned_shards":0,"number_of_pending_tasks":0,"number_of_in_flight_fetch":0,"task_max_waiting_in_queue_millis":0,"active_shards_percent_as_number":50.0}

Startup log:

sirishg@sirishg-vm:/u02/app/elasticsearch-2.1.1/bin$ ./elasticsearch
[2016-01-16 18:17:36,591][INFO ][node                     ] [node-1] version[2.1.1], pid[3596], build[40e2c53/2015-12-15T13:05:55Z]
[2016-01-16 18:17:36,594][INFO ][node                     ] [node-1] initializing ...
[2016-01-16 18:17:36,798][INFO ][plugins                  ] [node-1] loaded [], sites []
[2016-01-16 18:17:36,907][INFO ][env                      ] [node-1] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [12.6gb], net total_space [45.1gb], spins? [possibly], types [ext4]
[2016-01-16 18:17:43,349][INFO ][node                     ] [node-1] initialized
[2016-01-16 18:17:43,350][INFO ][node                     ] [node-1] starting ...
[2016-01-16 18:17:43,693][INFO ][transport                ] [node-1] publish_address {localhost/127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2016-01-16 18:17:43,713][INFO ][discovery                ] [node-1] my-application/8bfTdwZcSzaNC9_P2VYYvw
[2016-01-16 18:17:46,878][INFO ][cluster.service          ] [node-1] new_master {node-1}{8bfTdwZcSzaNC9_P2VYYvw}{127.0.0.1}{localhost/127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-01-16 18:17:46,980][INFO ][http                     ] [node-1] publish_address {localhost/127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2016-01-16 18:17:46,991][INFO ][node                     ] [node-1] started
[2016-01-16 18:17:47,318][INFO ][gateway                  ] [node-1] recovered [1] indices into cluster_state
[2016-01-16 18:20:03,866][INFO ][rest.suppressed          ] /logstash-*/_mapping/field/* Params: {ignore_unavailable=false, allow_no_indices=false, index=logstash-*, include_defaults=true, fields=*, _=1452986403826}
[logstash-*] IndexNotFoundException[no such index]
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver$WildcardExpressionResolver.resolve(IndexNameExpressionResolver.java:636)
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:133)
    at org.elasticsearch.cluster.metadata.IndexNameExpressionResolver.concreteIndices(IndexNameExpressionResolver.java:77)
    at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsAction.doExecute(TransportGetFieldMappingsAction.java:57)
    at org.elasticsearch.action.admin.indices.mapping.get.TransportGetFieldMappingsAction.doExecute(TransportGetFieldMappingsAction.java:40)
    at org.elasticsearch.action.support.TransportAction.execute(TransportAction.java:70)
    at org.elasticsearch.client.node.NodeClient.doExecute(NodeClient.java:58)

Kibana status:

sirishg@sirishg-vm:/u02/app/kibana-4.3.1-linux-x86/bin$ ./kibana 
  log   [18:18:36.697] [info][status][plugin:kibana] Status changed from uninitialized to green - Ready
  log   [18:18:36.786] [info][status][plugin:elasticsearch] Status changed from uninitialized to yellow - Waiting for Elasticsearch
  log   [18:18:36.852] [info][status][plugin:kbn_vislib_vis_types] Status changed from uninitialized to green - Ready
  log   [18:18:36.875] [info][status][plugin:markdown_vis] Status changed from uninitialized to green - Ready
  log   [18:18:36.883] [info][status][plugin:metric_vis] Status changed from uninitialized to green - Ready
  log   [18:18:36.907] [info][status][plugin:spyModes] Status changed from uninitialized to green - Ready
  log   [18:18:36.936] [info][status][plugin:statusPage] Status changed from uninitialized to green - Ready
  log   [18:18:36.950] [info][status][plugin:table_vis] Status changed from uninitialized to green - Ready
  log   [18:18:37.078] [info][listening] Server running at http://0.0.0.0:5601
  log   [18:18:37.446] [info][status][plugin:elasticsearch] Status changed from yellow to green - Kibana index ready

Kibana UI error:

Error: Please specify a default index pattern
KbnError@http://localhost:5601/bundles/commons.bundle.js:58172:21
NoDefaultIndexPattern@http://localhost:5601/bundles/commons.bundle.js:58325:6
loadDefaultIndexPattern/<@http://localhost:5601/bundles/kibana.bundle.js:97911:1
processQueue@http://localhost:5601/bundles/commons.bundle.js:42358:29
scheduleProcessQueue/<@http://localhost:5601/bundles/commons.bundle.js:42374:28
$RootScopeProvider/this.$get</Scope.prototype.$eval@http://localhost:5601/bundles/commons.bundle.js:43602:17
$RootScopeProvider/this.$get</Scope.prototype.$digest@http://localhost:5601/bundles/commons.bundle.js:43413:16
$RootScopeProvider/this.$get</Scope.prototype.$apply@http://localhost:5601/bundles/commons.bundle.js:43710:14
$LocationProvider/this.$get</<@http://localhost:5601/bundles/commons.bundle.js:39839:14
jQuery.event.dispatch@http://localhost:5601/bundles/commons.bundle.js:22720:16
jQuery.event.add/elemData.handle@http://localhost:5601/bundles/commons.bundle.js:22407:7
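
This Kibana error follows directly from the missing index: Kibana 4 cannot create or default to an index pattern until at least one index matching logstash-* exists in Elasticsearch. A quick way to confirm the pattern matches nothing (standard cat API; an empty result, or a not-found error depending on version, means no matching index):

    curl 'localhost:9200/_cat/indices/logstash-*?v'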

Logstash debug log:

  {:timestamp=>"2016-01-17T11:07:06.287000-0500", :message=>"Reading config file", :config_file=>"/u02/app/logstash-2.1.1/first-pipeline.conf", :level=>:debug, :file=>"logstash/agent.rb", :line=>"325", :method=>"local_config"}
{:timestamp=>"2016-01-17T11:07:06.420000-0500", :message=>"Compiled pipeline code:\n        @inputs = []\n        @filters = []\n        @outputs = []\n        @periodic_flushers = []\n        @shutdown_flushers = []\n\n          @input_file_1 = plugin(\"input\", \"file\", LogStash::Util.hash_merge_many({ \"path\" => (\"/u02/app/logstash-tutorial-dataset.log\") }, { \"start_position\" => (\"beginning\") }))\n\n          @inputs << @input_file_1\n\n          @filter_grok_2 = plugin(\"filter\", \"grok\", LogStash::Util.hash_merge_many({ \"match\" => {(\"message\") => (\"%{COMBINEDAPACHELOG}\")} }))\n\n          @filters << @filter_grok_2\n\n            @filter_grok_2_flush = lambda do |options, &block|\n              @logger.debug? && @logger.debug(\"Flushing\", :plugin => @filter_grok_2)\n\n              events = @filter_grok_2.flush(options)\n\n              return if events.nil? || events.empty?\n\n              @logger.debug? && @logger.debug(\"Flushing\", :plugin => @filter_grok_2, :events => events)\n\n                          events = @filter_geoip_3.multi_filter(events)\n  \n\n\n              events.each{|e| block.call(e)}\n            end\n\n            if @filter_grok_2.respond_to?(:flush)\n              @periodic_flushers << @filter_grok_2_flush if @filter_grok_2.periodic_flush\n              @shutdown_flushers << @filter_grok_2_flush\n            end\n\n          @filter_geoip_3 = plugin(\"filter\", \"geoip\", LogStash::Util.hash_merge_many({ \"source\" => (\"clientip\") }))\n\n          @filters << @filter_geoip_3\n\n            @filter_geoip_3_flush = lambda do |options, &block|\n              @logger.debug? && @logger.debug(\"Flushing\", :plugin => @filter_geoip_3)\n\n              events = @filter_geoip_3.flush(options)\n\n              return if events.nil? || events.empty?\n\n              @logger.debug? && @logger.debug(\"Flushing\", :plugin => @filter_geoip_3, :events => events)\n\n                \n\n              events.each{|e| block.call(e)}\n            end\n\n            if @filter_geoip_3.respond_to?(:flush)\n              @periodic_flushers << @filter_geoip_3_flush if @filter_geoip_3.periodic_flush\n              @shutdown_flushers << @filter_geoip_3_flush\n            end\n\n          @output_elasticsearch_4 = plugin(\"output\", \"elasticsearch\", LogStash::Util.hash_merge_many({ \"hosts\" => [(\"localhost:9200\")] }))\n\n          @outputs << @output_elasticsearch_4\n\n          @output_stdout_5 = plugin(\"output\", \"stdout\", LogStash::Util.hash_merge_many({ \"codec\" => (\"rubydebug\") }))\n\n          @outputs << @output_stdout_5\n\n  def filter_func(event)\n    events = [event]\n    @logger.debug? && @logger.debug(\"filter received\", :event => event.to_hash)\n              events = @filter_grok_2.multi_filter(events)\n              events = @filter_geoip_3.multi_filter(events)\n    \n    events\n  end\n  def output_func(event)\n    @logger.debug? && @logger.debug(\"output received\", :event => event.to_hash)\n    @output_elasticsearch_4.handle(event)\n    @output_stdout_5.handle(event)\n    \n  end", :level=>:debug, :file=>"logstash/pipeline.rb", :line=>"38", :method=>"initialize"}
{:timestamp=>"2016-01-17T11:07:06.426000-0500", :message=>"Plugin not defined in namespace, checking for plugin file", :type=>"input", :name=>"file", :path=>"logstash/inputs/file", :level=>:debug, :file=>"logstash/plugin.rb", :line=>"76", :method=>"lookup"}
{:timestamp=>"2016-01-17T11:07:06.451000-0500", :message=>"Plugin not defined in namespace, checking for plugin file", :type=>"codec", :name=>"plain", :path=>"logstash/codecs/plain", :level=>:debug, :file=>"logstash/plugin.rb", :line=>"76", :method=>"lookup"}
{:timestamp=>"2016-01-17T11:07:06.465000-0500", :message=>"config LogStash::Codecs::Plain/@charset = \"UTF-8\"", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.468000-0500", :message=>"config LogStash::Inputs::File/@path = [\"/u02/app/logstash-tutorial-dataset.log\"]", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.469000-0500", :message=>"config LogStash::Inputs::File/@start_position = \"beginning\"", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.472000-0500", :message=>"config LogStash::Inputs::File/@codec = <LogStash::Codecs::Plain charset=>\"UTF-8\">", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.480000-0500", :message=>"config LogStash::Inputs::File/@add_field = {}", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.481000-0500", :message=>"config LogStash::Inputs::File/@stat_interval = 1", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.492000-0500", :message=>"config LogStash::Inputs::File/@discover_interval = 15", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.493000-0500", :message=>"config LogStash::Inputs::File/@sincedb_write_interval = 15", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.496000-0500", :message=>"config LogStash::Inputs::File/@delimiter = \"\\n\"", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.498000-0500", :message=>"Plugin not defined in namespace, checking for plugin file", :type=>"filter", :name=>"grok", :path=>"logstash/filters/grok", :level=>:debug, :file=>"logstash/plugin.rb", :line=>"76", :method=>"lookup"}
{:timestamp=>"2016-01-17T11:07:06.515000-0500", :message=>"config LogStash::Filters::Grok/@match = {\"message\"=>\"%{COMBINEDAPACHELOG}\"}", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.524000-0500", :message=>"config LogStash::Filters::Grok/@add_tag = []", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.532000-0500", :message=>"config LogStash::Filters::Grok/@remove_tag = []", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.535000-0500", :message=>"config LogStash::Filters::Grok/@add_field = {}", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}
{:timestamp=>"2016-01-17T11:07:06.536000-0500", :message=>"config LogStash::Filters::Grok/@remove_field = []", :level=>:debug, :file=>"logstash/config/mixin.rb", :line=>"122", :method=>"config_init"}

Recent Elasticsearch logs:

sirishg@sirishg-vm:/u02/app/elasticsearch-2.1.1/bin$ ./elasticsearch
[2016-01-17 11:00:23,467][INFO ][node                     ] [node-1] version[2.1.1], pid[3418], build[40e2c53/2015-12-15T13:05:55Z]
[2016-01-17 11:00:23,470][INFO ][node                     ] [node-1] initializing ...
[2016-01-17 11:00:23,698][INFO ][plugins                  ] [node-1] loaded [], sites []
[2016-01-17 11:00:23,853][INFO ][env                      ] [node-1] using [1] data paths, mounts [[/ (/dev/sda1)]], net usable_space [12.6gb], net total_space [45.1gb], spins? [possibly], types [ext4]
[2016-01-17 11:00:27,412][INFO ][node                     ] [node-1] initialized
[2016-01-17 11:00:27,412][INFO ][node                     ] [node-1] starting ...
[2016-01-17 11:00:27,605][INFO ][transport                ] [node-1] publish_address {localhost/127.0.0.1:9300}, bound_addresses {127.0.0.1:9300}
[2016-01-17 11:00:27,616][INFO ][discovery                ] [node-1] my-application/rd4S1ZOdQXOj3_g-N22NnQ
[2016-01-17 11:00:31,121][INFO ][cluster.service          ] [node-1] new_master {node-1}{rd4S1ZOdQXOj3_g-N22NnQ}{127.0.0.1}{localhost/127.0.0.1:9300}, reason: zen-disco-join(elected_as_master, [0] joins received)
[2016-01-17 11:00:31,259][INFO ][http                     ] [node-1] publish_address {localhost/127.0.0.1:9200}, bound_addresses {127.0.0.1:9200}
[2016-01-17 11:00:31,260][INFO ][node                     ] [node-1] started
[2016-01-17 11:00:31,830][INFO ][gateway                  ] [node-1] recovered [2] indices into cluster_state

Did you manage to get it working? A few comments:

1) You are sometimes running Kibana on "0.0.0.0", which suggests something is wrong; check the configuration and the connection to Elasticsearch.

2) Which index are you putting the data into? logstash-*?

3) If all else fails, upgrade to the current 2.3.* (Elasticsearch) and 4.4.* (Kibana) releases.

4) For Logstash to actually pick up and read the file (and send the data to Elasticsearch), you should write to the file again (changing the file's creation/modification timestamp). This part is not always easy, because Logstash's file input effectively keeps a pointer to the last line appended to the file (see the sketch after this list).
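
For point 4, a minimal sketch of nudging Logstash to re-read: append a line to the monitored file, since anything past the recorded read position will be picked up. The sample entry below is an illustrative Apache combined-log line, not one from the actual dataset:

    # Append one new (illustrative) log line; the file input will emit it as an event.
    echo '127.0.0.1 - - [17/Jan/2016:11:00:00 -0500] "GET /test HTTP/1.1" 200 123 "-" "curl/7.35.0"' >> /u02/app/logstash-tutorial-dataset.log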

You may well have gotten it working by now, so perhaps I'm just rambling, but on the other hand this might help someone.
