
Waiting forever to connect to a topic with a new consumer group in Kafka (node-rdkafka)

I'm building a websocket backend that connects to a topic (with only one partition), consumes data from the earliest position, and keeps consuming new data until the websocket connection is closed. More than one websocket connection can exist at a time.

To ensure all data is consumed from the beginning, every time a websocket connection is made I create a new consumer group and subscribe to the topic:

const Kafka = require('node-rdkafka')
const { v4: uuidv4 } = require('uuid')

const kafkaConfig = (uuid) => ({
  // A fresh group id per websocket connection, so each one reads from the start
  'group.id': `my-topic-${uuid}`,
  'metadata.broker.list': KAFKA_URL,
})
const topicName = 'test-topic'
const consumer = new Kafka.KafkaConsumer(kafkaConfig(uuidv4()), {
  // topic config: with no committed offsets, start from the earliest message
  'auto.offset.reset': 'earliest',
})

console.log('attempting to connect to topic')
consumer.connect({ topic: topicName, timeout: 300 }, (err) => {
  if (err) {
    console.log('error connecting consumer to topic', topicName)
    throw err
  }
  console.log(`consumer connected to topic ${topicName}`)
  consumer.subscribe([topicName])
  // flowing mode: the callback fires for every message received
  consumer.consume((_err, data) => {
    // send data to websocket
  })
})

This works as expected. However, once I exceed four consumers/consumer groups, the consumer connection waits indefinitely: with the snippet above I see the 'attempting to connect to topic' log but nothing after it.

I read the Kafka documentation, and there appears to be no limit on the number of consumer groups.

I'm running Kafka/ZooKeeper in a Docker container on my localhost and I haven't set any limits on topics.

My docker-compose file:

zookeeper:
  image: confluentinc/cp-zookeeper:latest
  environment:
    ZOOKEEPER_CLIENT_PORT: 2181
    ZOOKEEPER_TICK_TIME: 2000

kafka:
  image: confluentinc/cp-kafka:latest
  labels:
    - 'custom.project=faster-cms'
    - 'custom.service=kafka'
  depends_on:
    - zookeeper
  ports:
    - 9092:9092
  environment:
    KAFKA_BROKER_ID: 1
    KAFKA_ZOOKEEPER_CONNECT: zookeeper:2181
    KAFKA_LISTENER_SECURITY_PROTOCOL_MAP: PLAINTEXT:PLAINTEXT,PLAINTEXT_HOST:PLAINTEXT
    KAFKA_ADVERTISED_LISTENERS: PLAINTEXT://kafka:29092,PLAINTEXT_HOST://localhost:9092
    KAFKA_INTER_BROKER_LISTENER_NAME: PLAINTEXT
    KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR: 1
    KAFKA_LOG4J_ROOT_LOGLEVEL: INFO
    KAFKA_LOG4J_LOGGERS: 'kafka.controller=INFO,kafka.producer.async.DefaultEventHandler=INFO,state.change.logger=INFO'
    CONFLUENT_SUPPORT_METRICS_ENABLE: 'false'

My question is: why does the connection wait indefinitely, and how do I raise the consumer limit or throw an error when it gets stuck?
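To surface the hang as an error instead of waiting forever, one option is a watchdog around the callback-style connect. This is an illustrative sketch, not part of node-rdkafka: `connectWithTimeout` and the `connectFn` shape are names introduced here.

```javascript
// Wrap a callback-style connect so a silent hang becomes a rejected
// promise after `ms` milliseconds. `connectFn` stands in for something
// like (cb) => consumer.connect({ timeout: 300 }, cb) — illustrative only.
function connectWithTimeout(connectFn, ms) {
  return new Promise((resolve, reject) => {
    const timer = setTimeout(
      () => reject(new Error(`consumer did not connect within ${ms}ms`)),
      ms
    )
    connectFn((err) => {
      clearTimeout(timer)
      if (err) reject(err)
      else resolve()
    })
  })
}
```

For example, `connectWithTimeout((cb) => consumer.connect({ timeout: 300 }, cb), 5000)` would reject after five seconds instead of hanging silently.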

Apparently this is a limitation of the libuv thread pool that node-rdkafka relies on: the pool defaults to 4 worker threads, and each client keeps one busy, so the fifth consumer waits indefinitely for a free thread. To raise the limit, set the UV_THREADPOOL_SIZE environment variable (for example in your .env file) before the Node process starts.
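A minimal sketch of the fix, assuming the variable is exported in the shell (or loaded from .env) before Node launches — libuv reads it once, at process startup, so setting it from inside already-running code has no effect:

```shell
# Raise libuv's worker-thread pool (default is 4) before starting Node.
# The value is read once at process startup, so export it first.
export UV_THREADPOOL_SIZE=16
```

With the pool enlarged, start the websocket backend from the same shell (or bake the variable into the container/process environment) and the additional consumers can connect.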
