
Configure Kafka Connect distributed connector log (connectDistributed.out)

Currently, two types of Kafka Connect logs are being collected:

  • connect-rest.log.2018-07-01-21, connect-rest.log.2018-07-01-22, ...
  • connectDistributed.out

The thing is that I don't know how to configure the connectDistributed.out file in Kafka Connect. Following is a sample of the file's output:

[2018-07-11 08:42:40,798] INFO WorkerSinkTask{id=elasticsearch-sink-connector-0}
Committing offsets asynchronously using sequence number
216: {test-1=OffsetAndMetadata{offset=476028, metadata=''},
test-0=OffsetAndMetadata{offset=478923, metadata=''},
test-2=OffsetAndMetadata{offset=477944, metadata=''}}
(org.apache.kafka.connect.runtime.WorkerSinkTask:325)
[2018-07-11 08:43:40,798] INFO WorkerSinkTask{id=elasticsearch-sink-connector-0}
Committing offsets asynchronously using sequence number 217:
{test-1=OffsetAndMetadata{offset=476404, metadata=''},
test-0=OffsetAndMetadata{offset=479241, metadata=''},
test-2=OffsetAndMetadata{offset=478316, metadata=''}}
(org.apache.kafka.connect.runtime.WorkerSinkTask:325)

Since I haven't configured any logging options for it, the file keeps growing over time. Today it reached 20 GB and I had to manually empty the file. So my question is: how do I configure this connectDistributed.out? I'm already configuring log options for other components, such as the Kafka broker log.


Following are some of the Kafka Connect-related log configurations under confluent-4.1.0/etc/kafka that I'm using.

log4j.properties

log4j.appender.requestAppender.File=${kafka.logs.dir}/kafka-request.log
log4j.appender.requestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.requestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.cleanerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.cleanerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.cleanerAppender.File=${kafka.logs.dir}/log-cleaner.log
log4j.appender.cleanerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.cleanerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.controllerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.controllerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.controllerAppender.File=${kafka.logs.dir}/controller.log
log4j.appender.controllerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.controllerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.authorizerAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.authorizerAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.authorizerAppender.File=${kafka.logs.dir}/kafka-authorizer.log
log4j.appender.authorizerAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.authorizerAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

# Change the two lines below to adjust ZK client logging
log4j.logger.org.I0Itec.zkclient.ZkClient=INFO
log4j.logger.org.apache.zookeeper=INFO

# Change the two lines below to adjust the general broker logging level (output to server.log and stdout)
log4j.logger.kafka=INFO
log4j.logger.org.apache.kafka=INFO

# Change to DEBUG or TRACE to enable request logging
log4j.logger.kafka.request.logger=WARN, requestAppender
log4j.additivity.kafka.request.logger=false

# Uncomment the lines below and change log4j.logger.kafka.network.RequestChannel$ to TRACE for additional output
# related to the handling of requests
#log4j.logger.kafka.network.Processor=TRACE, requestAppender
#log4j.logger.kafka.server.KafkaApis=TRACE, requestAppender
#log4j.additivity.kafka.server.KafkaApis=false
log4j.logger.kafka.network.RequestChannel$=WARN, requestAppender
log4j.additivity.kafka.network.RequestChannel$=false

log4j.logger.kafka.controller=TRACE, controllerAppender
log4j.additivity.kafka.controller=false

log4j.logger.kafka.log.LogCleaner=INFO, cleanerAppender
log4j.additivity.kafka.log.LogCleaner=false

log4j.logger.state.change.logger=TRACE, stateChangeAppender
log4j.additivity.state.change.logger=false

# Access denials are logged at INFO level, change to DEBUG to also log allowed accesses
log4j.logger.kafka.authorizer.logger=INFO, authorizerAppender
log4j.additivity.kafka.authorizer.logger=false

connect-log4j.properties

log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

log4j.logger.org.apache.zookeeper=ERROR
log4j.logger.org.I0Itec.zkclient=ERROR
log4j.logger.org.reflections=ERROR


log4j.appender.kafkaConnectRestAppender=org.apache.log4j.DailyRollingFileAppender
log4j.appender.kafkaConnectRestAppender.DatePattern='.'yyyy-MM-dd-HH
log4j.appender.kafkaConnectRestAppender.File=/home/ec2-user/logs/connect-rest.log
log4j.appender.kafkaConnectRestAppender.layout=org.apache.log4j.PatternLayout
log4j.appender.kafkaConnectRestAppender.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.logger.org.apache.kafka.connect.runtime.rest=INFO, kafkaConnectRestAppender
log4j.additivity.org.apache.kafka.connect.runtime.rest=false

The connectDistributed.out file only exists if you use daemon mode, e.g.

connect-distributed -daemon connect-distributed.properties

Reason: in the kafka-run-class script, CONSOLE_OUTPUT_FILE is set to connectDistributed.out:

# Launch mode
if [ "x$DAEMON_MODE" = "xtrue" ]; then
    nohup $JAVA $KAFKA_HEAP_OPTS $KAFKA_JVM_PERFORMANCE_OPTS $KAFKA_GC_LOG_OPTS $KAFKA_JMX_OPTS $KAFKA_LOG4J_OPTS -cp $CLASSPATH $KAFKA_OPTS "$@" > "$CONSOLE_OUTPUT_FILE" 2>&1 < /dev/null &
else... 

Option 1: Load a custom log4j property file

You can update the KAFKA_LOG4J_OPTS environment variable to point at any log4j property file you want before starting Connect (see the example below):

$ export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:///path/to/connect-log4j-new.properties"
$ connect-distributed connect-distributed.properties

Note: -daemon is not used here.

If the log4j properties no longer include a ConsoleAppender, this command will print next to nothing to the terminal and just appear to hang, so it is a good idea to run it with nohup.
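
For example, a minimal sketch of running it in the background (the config path and the /dev/null redirect are arbitrary choices here; with a file appender configured, the console output can safely be discarded):

$ export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:///path/to/connect-log4j-new.properties"
$ nohup connect-distributed connect-distributed.properties > /dev/null 2>&1 &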


The default log4j config is named connect-log4j.properties, and in the Confluent Platform it is in the etc/kafka/ folder. This is what it looks like by default:

log4j.rootLogger=INFO, stdout

log4j.appender.stdout=org.apache.log4j.ConsoleAppender

In order to set a maximum log file size, you need to change the root logger to write to a FileAppender rather than a ConsoleAppender, but I prefer using a DailyRollingFileAppender (a size-capped sketch is shown after the example below).

Here is an example:

log4j.rootLogger=INFO, stdout, FILE

log4j.appender.FILE=org.apache.log4j.DailyRollingFileAppender
log4j.appender.FILE.DatePattern='.'yyyy-MM-dd
log4j.appender.FILE.File=/var/log/kafka-connect/connect.log
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=[%d] %p %m (%c)%n

log4j.appender.stdout=org.apache.log4j.ConsoleAppender
log4j.appender.stdout.layout=org.apache.log4j.PatternLayout
log4j.appender.stdout.layout.ConversionPattern=[%d] %p %m (%c:%L)%n

log4j.logger.org.apache.zookeeper=ERROR
log4j.logger.org.I0Itec.zkclient=ERROR
log4j.logger.org.reflections=ERROR
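
If the main concern is capping disk usage rather than rotating by date, a size-based variant using the standard log4j RollingFileAppender is another option. A minimal sketch (the path, size limit, and backup count below are arbitrary choices, not values from the original answer):

log4j.rootLogger=INFO, FILE

log4j.appender.FILE=org.apache.log4j.RollingFileAppender
log4j.appender.FILE.File=/var/log/kafka-connect/connect.log
log4j.appender.FILE.MaxFileSize=100MB
log4j.appender.FILE.MaxBackupIndex=10
log4j.appender.FILE.layout=org.apache.log4j.PatternLayout
log4j.appender.FILE.layout.ConversionPattern=[%d] %p %m (%c)%n

With these settings, log4j rolls connect.log once it reaches 100 MB and keeps at most 10 rolled files, so total disk usage stays bounded.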

I don't have enough reputation to comment (but I can answer...), so I just want to point out, in response to cricket's answer, that I would NOT use DailyRollingFileAppender. The log4j documentation itself advises against it due to synchronization and data-loss issues. Instead, I would use RollingFileAppender in conjunction with TimeBasedRollingPolicy. I noticed this after some odd behavior with DailyRollingFileAppender in Kafka Connect. You will need to include the log4j "extras" jar on the classpath to make this work.
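
A minimal sketch of that combination, assuming the log4j 1.x "extras" companion jar is on the Connect worker's classpath. Note that the extras rolling appender is generally configured through an XML file (log4j.xml, loaded by DOMConfigurator) rather than a properties file, and the file names below are arbitrary choices:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE log4j:configuration SYSTEM "log4j.dtd">
<log4j:configuration xmlns:log4j="http://jakarta.apache.org/log4j/">

  <!-- Time-based rolling appender from the log4j "extras" companion jar -->
  <appender name="FILE" class="org.apache.log4j.rolling.RollingFileAppender">
    <rollingPolicy class="org.apache.log4j.rolling.TimeBasedRollingPolicy">
      <!-- One file per day; rolled files are gzip-compressed because of the .gz suffix -->
      <param name="FileNamePattern" value="/var/log/kafka-connect/connect.%d{yyyy-MM-dd}.log.gz"/>
    </rollingPolicy>
    <layout class="org.apache.log4j.PatternLayout">
      <param name="ConversionPattern" value="[%d] %p %m (%c)%n"/>
    </layout>
  </appender>

  <root>
    <level value="INFO"/>
    <appender-ref ref="FILE"/>
  </root>

</log4j:configuration>

It can then be loaded the same way as a custom properties file, e.g. export KAFKA_LOG4J_OPTS="-Dlog4j.configuration=file:///path/to/connect-log4j.xml" before starting connect-distributed.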
