
How to read logs directly from elasticsearch without using Kibana

I have an ASP.NET Core Web API written in C#, set up with docker-compose, Elasticsearch, Serilog, and Kibana. I plan on removing Kibana from the docker-compose.yml file. Serilog generates the log events and, with a sink to Elasticsearch configured, writes them where Elasticsearch can read them. How do I go about reading those logs that are now in Elasticsearch without having to go into Kibana to view and read them?
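For context, the sink configuration I mean looks roughly like this (a minimal sketch assuming the Serilog.Sinks.Elasticsearch package and a local node on port 9200; the index format is just an example):

using System;
using Serilog;
using Serilog.Sinks.Elasticsearch;

Log.Logger = new LoggerConfiguration()
    .Enrich.FromLogContext()
    .WriteTo.Elasticsearch(new ElasticsearchSinkOptions(new Uri("http://localhost:9200"))
    {
        AutoRegisterTemplate = true,
        IndexFormat = "webapi-logs-{0:yyyy.MM.dd}" // example index name, adjust as needed
    })
    .CreateLogger();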

Are there any recommendations for documentation and/or a package for this, or is this something that needs to be programmed from scratch?


Suggestion attempt:

I went to download Kafka, then opened PowerShell as an admin and did a wget (url). After it downloaded, I ran tar -xzf kafka_2.13-2.8.0.tgz & cd kafka_2.13-2.8.0. I then followed what you advised to activate the Zookeeper and Kafka brokers and then create the topic. However, for each step you said to do, nothing happened. When I would try to activate Zookeeper, it would ask me how I want to open the file, so I would just hit ESC and then run the other commands, but the same thing would come up. Should it be doing that?


Make use of log4net as the log provider together with its Kafka_Appender. The appender will produce your application logs, at every level, to a Kafka topic and its consumers; Logstash will then ingest those logs into your Elasticsearch index as output.
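If you want to see what that production side amounts to without the appender, here is a minimal sketch of publishing a log line to the topic with the Confluent.Kafka client (the client package, topic name, and broker address are my own assumptions for illustration, not part of the appender itself):

using System;
using System.Threading.Tasks;
using Confluent.Kafka;

class LogProducerSketch
{
    static async Task Main()
    {
        var config = new ProducerConfig { BootstrapServers = "localhost:9092" };

        using var producer = new ProducerBuilder<Null, string>(config).Build();

        // Each log event is sent as a JSON string to the topic that Logstash will consume.
        var logJson = "{\"level\":\"Information\",\"message\":\"Order created\",\"timestamp\":\"2021-01-01T12:00:00Z\"}";

        await producer.ProduceAsync("test-topic", new Message<Null, string> { Value = logJson });
        producer.Flush(TimeSpan.FromSeconds(5));
    }
}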

There are many advantages to this roadmap. You get a very powerful stream processor, Apache Kafka, whose queue-based messaging helps you trace every log that is produced, and you get Logstash, where you can add further stream processing and filters such as grok, have multiple outputs, and even store your logs as CSV or on the file system.

First activate Zookeeper and the Kafka broker and create the topic (plus a console producer for it) from the bin directory of the downloaded Kafka distribution (on Windows, use the equivalent .bat scripts in bin\windows):

Activate the Zookeeper server

./zookeeper-server-start.sh ../config/zookeeper.properties

Activate the Kafka broker

./kafka-server-start.sh ../config/server.properties

Create Topic

./kafka-topics.sh --create --topic test-topic --zookeeper localhost:2181 --replication-factor 1 --partitions 4

Start a console producer for the created topic (to test that messages reach it)

./kafka-console-producer.sh --broker-list localhost:9092 --topic test-topic

Then add the log appender for the created topic so your logs are produced to it (this part is up to you), and after that create a Logstash pipeline such as the configuration below:

input {
  kafka {
    group_id          => "35834"
    topics            => ["yourtopicname"]
    bootstrap_servers => "localhost:9092"
    codec             => json
  }
}
filter {
}
output {
  file {
    path => "C:\somedirectory"
  }
  elasticsearch {
    hosts         => ["localhost:9200"]
    document_type => "_doc"
    index         => "yourindexname"
  }
  stdout {
    codec => rubydebug
  }
}

And run it with the usual command in the bin directory of Logstash:

./logstash -f yourconfigurationfile.conf

Please note that you should create the index before activating Logstash. Moreover, you do not need to design a mapping for your output index: as soon as the first document is inserted, Elasticsearch will create a mapping for all the relevant fields in the index.
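Creating that index up front only takes a single request; here is a minimal sketch using the high-level .NET client mentioned below (assuming NEST 7.x, a local node on port 9200, and the index name from the pipeline above):

using System;
using Nest;

class CreateIndexSketch
{
    static void Main()
    {
        var client = new ElasticClient(new ConnectionSettings(new Uri("http://localhost:9200")));

        // Create the empty index; field mappings are added dynamically
        // once Logstash inserts the first document.
        var response = client.Indices.Create("yourindexname");
        Console.WriteLine(response.IsValid ? "Index created" : response.DebugInformation);
    }
}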

You can use one of the two official Elasticsearch clients for .NET.

There is a low-level and a high-level client; you can read more about the differences and how to use each one in the official documentation.
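As a rough sketch of reading the logs back with the high-level client (NEST 7.x assumed, along with the index name used earlier and a minimal POCO whose fields you would adjust to match what your sink or pipeline actually writes):

using System;
using Nest;

// Minimal shape for a log document; rename fields to match your index.
public class LogEvent
{
    public DateTime Timestamp { get; set; }
    public string Level { get; set; }
    public string Message { get; set; }
}

class ReadLogsSketch
{
    static void Main()
    {
        var settings = new ConnectionSettings(new Uri("http://localhost:9200"))
            .DefaultIndex("yourindexname");
        var client = new ElasticClient(settings);

        // Fetch the 50 most recent log documents, newest first.
        var response = client.Search<LogEvent>(s => s
            .Size(50)
            .Sort(so => so.Descending("@timestamp"))
            .Query(q => q.MatchAll()));

        foreach (var log in response.Documents)
        {
            Console.WriteLine($"{log.Timestamp:o} [{log.Level}] {log.Message}");
        }
    }
}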
