
Kafka logs grow too large

I can see that the Kafka logs are growing rapidly and flooding the file system.

How do I change Kafka's settings so that it writes fewer log entries and rotates these logs frequently?

The files are located at /opt/kafka/kafka_2.12-2.2.2/logs and their sizes are:

5.9G    server.log.2020-11-24-14
5.9G    server.log.2020-11-24-15
5.9G    server.log.2020-11-24-16
5.7G    server.log.2020-11-24-17

Sample log entries from the above files:

[2020-11-24 14:59:59,999] WARN Exception when following the leader (org.apache.zookeeper.server.quorum.Learner)
java.io.IOException: No space left on device
        at java.io.FileOutputStream.writeBytes(Native Method)
        at java.io.FileOutputStream.write(FileOutputStream.java:326)
        at org.apache.zookeeper.common.AtomicFileOutputStream.write(AtomicFileOutputStream.java:74)
        at sun.nio.cs.StreamEncoder.writeBytes(StreamEncoder.java:221)
        at sun.nio.cs.StreamEncoder.implFlushBuffer(StreamEncoder.java:291)
        at sun.nio.cs.StreamEncoder.implFlush(StreamEncoder.java:295)
        at sun.nio.cs.StreamEncoder.flush(StreamEncoder.java:141)
        at java.io.OutputStreamWriter.flush(OutputStreamWriter.java:229)
        at java.io.BufferedWriter.flush(BufferedWriter.java:254)
        at org.apache.zookeeper.server.quorum.QuorumPeer.writeLongToFile(QuorumPeer.java:1391)
        at org.apache.zookeeper.server.quorum.QuorumPeer.setCurrentEpoch(QuorumPeer.java:1426)
        at org.apache.zookeeper.server.quorum.Learner.syncWithLeader(Learner.java:454)
        at org.apache.zookeeper.server.quorum.Follower.followLeader(Follower.java:83)
        at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:981)
[2020-11-24 14:59:59,999] INFO shutdown called (org.apache.zookeeper.server.quorum.Learner)
java.lang.Exception: shutdown Follower
        at org.apache.zookeeper.server.quorum.Follower.shutdown(Follower.java:169)
        at org.apache.zookeeper.server.quorum.QuorumPeer.run(QuorumPeer.java:985)
[2020-11-24 14:59:59,999] INFO Shutting down (org.apache.zookeeper.server.quorum.FollowerZooKeeperServer)
[2020-11-24 14:59:59,999] INFO LOOKING (org.apache.zookeeper.server.quorum.QuorumPeer)
[2020-11-24 14:59:59,999] INFO New election. My id =  1, proposed zxid=0x1000001d2 (org.apache.zookeeper.server.quorum.FastLeaderElection)
[2020-11-24 14:59:59,999] INFO Notification: 1 (message format version), 1 (n.leader), 0x1000001d2 (n.zxid), 0x2 (n.round), LOOKING (n.state), 1 (n.sid), 0x1 (n.peerEpoch) LOOKING (my state) (org.apache.zookeeper.server.quorum.FastLeaderElection)

It also writes to /opt/kafka/kafka_2.12-2.2.2/kafka.log:

[2020-12-05 16:51:10,109] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-30 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-12-05 16:51:10,109] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-36 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-12-05 16:51:10,109] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-42 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-12-05 16:51:10,110] INFO [GroupMetadataManager brokerId=1] Finished loading offsets and group metadata from __consumer_offsets-48 in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-12-05 17:01:09,528] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)
[2020-12-05 17:11:09,528] INFO [GroupMetadataManager brokerId=1] Removed 0 expired offsets in 0 milliseconds. (kafka.coordinator.group.GroupMetadataManager)

Kafka is used with the Elastic Stack.

Below is the relevant entry from the server.properties file:

# A comma seperated list of directories under which to store log files
log.dirs=/var/log/kafka

Its log files are:

/var/log/kafka
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 heartbeat-1
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 __consumer_offsets-12
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 auditbeat-0
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 apm-2
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 __consumer_offsets-28
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 filebeat-2
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 __consumer_offsets-38
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 __consumer_offsets-44
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 __consumer_offsets-6
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 __consumer_offsets-16
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 metricbeat-0
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 __consumer_offsets-22
drwxr-xr-x 2 kafka users 4.0K Dec  5 16:51 __consumer_offsets-32
-rw-r--r-- 1 kafka users  747 Dec  5 18:02 recovery-point-offset-checkpoint
-rw-r--r-- 1 kafka users    4 Dec  5 18:02 log-start-offset-checkpoint
-rw-r--r-- 1 kafka users  749 Dec  5 18:03 replication-offset-checkpoint

DEBUG-level logging is not enabled in any of the files under /opt/kafka/kafka_2.12-2.2.2/config.

How do I make sure it does not generate such huge files in /opt/kafka/kafka_2.12-2.2.2/logs, and how can I rotate them regularly with compression?

Thanks,

log.dirs is the actual broker storage, not the process logs, so it should not sit in /var/log alongside other process logs.
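A minimal sketch of the two separate locations (assuming the stock startup scripts, which read the LOG_DIR environment variable to decide where process logs such as server.log are written; the paths here are examples only):

# server.properties: broker data (topic partition directories), not process logs
log.dirs=/kafka/data

# Shell, before starting the broker: where server.log and friends are written
# (the default is <kafka home>/logs, i.e. /opt/kafka/kafka_2.12-2.2.2/logs here)
export LOG_DIR=/var/log/kafka-broker
bin/kafka-server-start.sh config/server.properties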

Around 6G per day is not unreasonable either, but you can edit the log4j.properties file so that the rolling file appenders only keep roughly 1 to 2 days of output.
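For example, one way to cap this is to switch the server-log appender in config/log4j.properties from the default daily (hourly-rolling) appender to a size-bounded rolling appender; this is a sketch only, with the appender name taken from the stock file and the size/count values picked as examples:

# Bound server.log to roughly 10 x 100 MB instead of one file per hour
log4j.appender.kafkaAppender=org.apache.log4j.RollingFileAppender
log4j.appender.kafkaAppender.File=${kafka.logs.dir}/server.log
log4j.appender.kafkaAppender.MaxFileSize=100MB
log4j.appender.kafkaAppender.MaxBackupIndex=10
log4j.appender.kafkaAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Optionally write less in the first place by raising the root level
log4j.rootLogger=WARN, kafkaAppender

The same change can be applied to the other appenders defined in the stock file (state-change, request, controller, etc.) if those files also grow large.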

In general, as with any Linux administration task, you would have separate disk volumes for /var/log, for your OS storage, and dedicated disks for any server data, say a mount at /kafka.
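A hypothetical illustration of that layout (the device name, filesystem, and paths are assumptions to adapt to your environment):

# Dedicated volume for broker data
sudo mkfs.xfs /dev/sdb1
sudo mkdir -p /kafka
echo '/dev/sdb1  /kafka  xfs  defaults,noatime  0 0' | sudo tee -a /etc/fstab
sudo mount /kafka
sudo chown -R kafka:users /kafka

# server.properties: point broker data at the dedicated mount
log.dirs=/kafka

Note that the existing partition directories under /var/log/kafka would need to be moved to the new location (with the broker stopped) before restarting it.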
