
Logstash not creating index on Elasticsearch

I'm trying to set up an ELK stack on an EC2 Ubuntu 14.04 instance. Everything is installed and working just fine, except for one thing.

Logstash is not creating an index on Elasticsearch. Whenever I try to access Kibana, it wants me to choose an index from Elasticsearch.

Logstash is on the ES node, but the index is missing. Here's the message I get:

"Unable to fetch mapping. Do you have indices matching the pattern?"

Am I missing something? I followed this tutorial: Digital Ocean

EDIT: Here's a screenshot of the error I'm facing: [screenshot: Logstash missing index in ES (Kibana 4)] Yet another screenshot:

I got identical results on an Amazon AMI (a CentOS/RHEL clone).

In fact, exactly as above... until I injected some data into Elasticsearch - this creates the first daily index - and then Kibana starts working. My simple .conf is:

input {
  stdin {
    type => "syslog"
  }
}
output {
  stdout { codec => rubydebug }
  elasticsearch {
    host => "localhost"
    port => 9200
    protocol => "http"
  }
}
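(A side note for anyone on a newer release: the `host`, `port`, and `protocol` options above belong to the 1.x elasticsearch output. From Logstash 2.x onward they were folded into a single `hosts` setting, so a rough equivalent - a sketch, not something tested on this setup - would be:)

```conf
output {
  stdout { codec => rubydebug }
  elasticsearch {
    hosts => ["localhost:9200"]
  }
}
```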

then

cat /var/log/messages | logstash -f your.conf

Why stdin, you ask? Well, it's not super clear anywhere (I'm also a new Logstash user and found this very unclear) that Logstash never terminates (e.g. when using the file plugin) - it's designed to keep watching.

But using stdin, Logstash will run, send the data to Elasticsearch (which creates the index), and then exit.

If I did the same thing with the file input plugin, it would never create the index - I don't know why.
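For what it's worth, one common explanation (an assumption on my part, not something I verified on this setup) is that the file input tails files from the end by default and records its position in a sincedb, so lines that already exist are never read. Something like this should force a full read; the path is just the one used above:

```conf
input {
  file {
    path => "/var/log/messages"
    type => "syslog"
    start_position => "beginning"  # read existing content, not just new lines
    sincedb_path => "/dev/null"    # forget the position between runs
  }
}
```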

I finally managed to identify the issue. For some reason, port 5000 was being used by another service, which was preventing us from accepting any incoming connections. So all you have to do is edit the logstash.conf file and change the port from 5000 to 5001, or anything convenient.
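In the layout from that DigitalOcean tutorial, the port sits in the lumberjack input; the file name and certificate paths below follow the tutorial, so adjust them to your own setup:

```conf
# /etc/logstash/conf.d/01-lumberjack-input.conf
input {
  lumberjack {
    port => 5001   # was 5000, which another service was already holding
    type => "logs"
    ssl_certificate => "/etc/pki/tls/certs/logstash-forwarder.crt"
    ssl_key => "/etc/pki/tls/private/logstash-forwarder.key"
  }
}
```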

Make sure all of your logstash-forwarders are sending their logs to the new port, and you should be good to go. If you generated logstash-forwarder.crt using the FQDN method, then the logstash-forwarder should point to that same FQDN, not an IP.
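For reference, the matching change on the shipper side lives in /etc/logstash-forwarder.conf. The hostname below is a placeholder - it must be the same FQDN the certificate was issued for:

```json
{
  "network": {
    "servers": [ "logstash.example.com:5001" ],
    "ssl ca": "/etc/pki/tls/certs/logstash-forwarder.crt",
    "timeout": 15
  }
}
```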

Is this Kibana 3 or 4?

If it's Kibana 4, can you click on Settings in the top-line menu and choose Indices, then make sure the index name contains 'logstash-*'? Then click on the 'time-field' name and choose '@timestamp'.
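It's also worth confirming from the Elasticsearch side that a matching index actually exists before blaming Kibana (this assumes ES is listening on localhost:9200, as in the question's setup):

```shell
# list all indices; look for names like logstash-YYYY.MM.DD
curl 'localhost:9200/_cat/indices?v'
```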

I've added a screenshot of my settings below; be careful which options you tick.

[screenshot: Logstash settings]
