Kafka subscriber client 0.10.2 - Spring Integration - INVALID_TOPIC_EXCEPTION
I'm trying to subscribe to a Kafka server using Spring Integration. I can see the Kafka connect log in the application log; it is given below.
Please let me know if I'm missing any necessary configuration here.
16:56:02.959 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig ConsumerConfig values:
auto.commit.interval.ms = 100
auto.offset.reset = latest
bootstrap.servers = [10.176.138.40:9092, 10.176.138.208:9092, 10.176.138.169:9092, 10.176.138.184:9092, 10.176.138.188:9092]
check.crcs = true
client.id =
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = amqunpnotifyd-dev-thilak
heartbeat.interval.ms = 1000
interceptor.classes = null
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.ms = 5000
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
16:56:02.967 [main] INFO org.apache.kafka.clients.consumer.ConsumerConfig ConsumerConfig values:
auto.commit.interval.ms = 100
auto.offset.reset = latest
bootstrap.servers = [10.176.138.40:9092, 10.176.138.208:9092, 10.176.138.169:9092, 10.176.138.184:9092, 10.176.138.188:9092]
check.crcs = true
client.id = consumer-1
connections.max.idle.ms = 540000
enable.auto.commit = true
exclude.internal.topics = true
fetch.max.bytes = 52428800
fetch.max.wait.ms = 500
fetch.min.bytes = 1
group.id = amqunpnotifyd-dev-thilak
heartbeat.interval.ms = 1000
interceptor.classes = null
key.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
max.partition.fetch.bytes = 1048576
max.poll.interval.ms = 300000
max.poll.records = 500
metadata.max.age.ms = 300000
metric.reporters = []
metrics.num.samples = 2
metrics.recording.level = INFO
metrics.sample.window.ms = 30000
partition.assignment.strategy = [class org.apache.kafka.clients.consumer.RangeAssignor]
receive.buffer.bytes = 65536
reconnect.backoff.ms = 5000
request.timeout.ms = 305000
retry.backoff.ms = 100
sasl.jaas.config = null
sasl.kerberos.kinit.cmd = /usr/bin/kinit
sasl.kerberos.min.time.before.relogin = 60000
sasl.kerberos.service.name = null
sasl.kerberos.ticket.renew.jitter = 0.05
sasl.kerberos.ticket.renew.window.factor = 0.8
sasl.mechanism = GSSAPI
security.protocol = PLAINTEXT
send.buffer.bytes = 131072
session.timeout.ms = 10000
ssl.cipher.suites = null
ssl.enabled.protocols = [TLSv1.2, TLSv1.1, TLSv1]
ssl.endpoint.identification.algorithm = null
ssl.key.password = null
ssl.keymanager.algorithm = SunX509
ssl.keystore.location = null
ssl.keystore.password = null
ssl.keystore.type = JKS
ssl.protocol = TLS
ssl.provider = null
ssl.secure.random.implementation = null
ssl.trustmanager.algorithm = PKIX
ssl.truststore.location = null
ssl.truststore.password = null
ssl.truststore.type = JKS
value.deserializer = class org.apache.kafka.common.serialization.StringDeserializer
I'm getting the following WARN in the logger even though the topic exists:
WARN org.apache.kafka.clients.NetworkClient Error while fetching metadata with correlation id 3 : {"notification.email.click"=INVALID_TOPIC_EXCEPTION}
Because of this, the topic is not assigned to my consumer group.
As the documentation says, this particular exception occurs when "The client has attempted to perform an operation on an invalid topic."
This generally happens when the client is unable to publish for a host of reasons. Since your configuration is not known, you can start by debugging the reasons above.
Your topic name is "notification.email.click" (including the quotes). A double quote (") is not a legal character for a topic name.
According to the Kafka docs, the legal special characters for topic naming are:
.
_
-
The source code for 0.10.2 defines the following legal characters:
val legalChars = "[a-zA-Z0-9\\._\\-]"
private val maxNameLength = 249
private val rgx = new Regex(legalChars + "+")
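The same validation can be sketched in Java as a small helper that mirrors the Scala regex and length limit above (the class and method names here are hypothetical, for illustration only; the real broker-side check in `kafka.common.Topic` also rejects the special names "." and ".."):

```java
import java.util.regex.Pattern;

public class TopicNameCheck {
    // Mirrors the legalChars regex and maxNameLength from Kafka 0.10.2's Topic.scala.
    // Note: Kafka additionally rejects the literal names "." and "..", omitted here.
    private static final Pattern LEGAL = Pattern.compile("[a-zA-Z0-9._-]+");
    private static final int MAX_NAME_LENGTH = 249;

    static boolean isLegal(String name) {
        return !name.isEmpty()
                && name.length() <= MAX_NAME_LENGTH
                && LEGAL.matcher(name).matches();
    }

    public static void main(String[] args) {
        // The bare name is fine; the quoted name fails because '"' is not in the character class.
        System.out.println(isLegal("notification.email.click"));
        System.out.println(isLegal("\"notification.email.click\""));
    }
}
```

Running a check like this against the exact string you pass to the consumer (for example, a value read from a properties file that accidentally kept its surrounding quotes) would surface the illegal name before the client ever reaches the broker.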