Data loss while sending from fluentd to AWS Kinesis Firehose
We are using fluentd to send logs to AWS Kinesis Firehose. Every so often we can see that a few records are not delivered to Kinesis Firehose. This is our fluentd configuration:
<system>
  log_level info
</system>

<source>
  @type tail
  path "/var/log/app/tracy.log*"
  pos_file "/var/tmp/tracy.log.pos"
  pos_file_compaction_interval 72h
  @log_level "error"
  tag "tracylog"
  <parse>
    @type "json"
    time_key False
  </parse>
</source>

<source>
  @type monitor_agent
  bind 127.0.0.1
  port 24220
</source>

<match tracylog>
  @type "kinesis_firehose"
  region "${awsRegion}"
  delivery_stream_name "${delivery_stream_name}"
  <instance_profile_credentials>
  </instance_profile_credentials>
  <buffer>
    # Frequency of ingestion
    flush_interval 30s
    flush_thread_count 4
    chunk_limit_size 1m
  </buffer>
</match>
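One way records can be lost on the sending side is buffer overflow: with the stock buffer settings, a chunk that exhausts its retries is discarded, and new events are rejected while the buffer is full. A hedged sketch of buffer options that trade throughput for durability (the commented lines are illustrative suggestions, not part of the original question):

```
<buffer>
  flush_interval 30s
  flush_thread_count 4
  chunk_limit_size 1m
  # Illustrative additions, not in the original config:
  overflow_action block   # apply back-pressure instead of rejecting events when full
  retry_forever true      # keep retrying a failed chunk instead of discarding it
</buffer>
```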
A few changes in the configuration solved my problem:
<system>
  log_level info
</system>

<source>
  @type tail
  path "/var/log/app/tracy.log*"
  pos_file "/var/tmp/tracy.log.pos"
  pos_file_compaction_interval 72h
  read_from_head true
  follow_inodes true
  @log_level "error"
  tag "tracylog"
  <parse>
    @type "json"
    time_key False
  </parse>
</source>

<source>
  @type monitor_agent
  bind 127.0.0.1
  port 24220
</source>

<match tracylog>
  @type "kinesis_firehose"
  region "${awsRegion}"
  delivery_stream_name "${delivery_stream_name}"
  <instance_profile_credentials>
  </instance_profile_credentials>
  <buffer>
    flush_interval 2
    flush_thread_interval 0.1
    flush_thread_burst_interval 0.01
    flush_thread_count 8
  </buffer>
</match>
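For context on why this helps (my annotation, not part of the original answer): with a wildcard `path` and rotating log files, `in_tail` by default starts reading a newly discovered file from its end and tracks files by path, so lines written around a rotation can be skipped. The two added `tail` parameters change that behavior:

```
<source>
  @type tail
  path "/var/log/app/tracy.log*"
  read_from_head true   # read newly discovered files from the beginning, not the tail
  follow_inodes true    # track files by inode so a rotated file keeps being read
  # ... rest of the source block as above ...
</source>
```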