
RecordTooLargeException in Kafka

The following Kafka publishing code throws a RecordTooLargeException.

I tried all the solutions given on Stack Overflow that mention properties such as max.request.size, but nothing worked. The exact stack trace is:

Caused by: org.springframework.kafka.KafkaException: Send failed; nested exception is org.apache.kafka.common.errors.RecordTooLargeException: The message is 1696090 bytes when serialized which is larger than 1048576, which is the value of the max.request.size configuration

    @SuppressWarnings("unchecked")
    @Override
    public void run(String... args) throws Exception {

        JSONArray array = new JSONArray();

        for (int i = 0; i < 8000; i++) {
            JSONObject object = new JSONObject();
            object.put("no", 1);
            object.put("name", "Kella Vivek");
            object.put("salary", 1000);
            object.put("address", "2-143");
            object.put("city", "gpm");
            object.put("pin", 534316);
            object.put("dist", "west");
            object.put("state", "ap");
            object.put("username", "mff");
            object.put("password", "mff");
            array.add(object);
        }

        ObjectMapper mapper = new ObjectMapper();
        String string = mapper.writerWithDefaultPrettyPrinter().writeValueAsString(array);

        template.send("consume", string);

    }

This is not a Spring problem. You need to tweak a number of parameters in the Kafka producer to make this work.

To answer your question, I did the following to enable sending 100 MB messages.

Create the producer properties and set buffer.memory, message.max.bytes and max.request.size according to your requirements:

Properties producerProperties = new Properties();
producerProperties.put("buffer.memory", 104857600);
producerProperties.put("message.max.bytes", 104857600);
producerProperties.put("max.request.size", 104857600);
producerProperties.put("bootstrap.servers", kafkaBootstrapServers);
producerProperties.put("acks", "all");
producerProperties.put("retries", 0);
producerProperties.put("batch.size", 16384);
producerProperties.put("linger.ms", 1);
producerProperties.put("key.serializer", "org.apache.kafka.common.serialization.StringSerializer");
producerProperties.put("value.serializer", "org.apache.kafka.common.serialization.StringSerializer");

Create the producer using the above properties:

KafkaProducer<String, String> producer = new KafkaProducer<>(producerProperties);

And now send:

private static void sendKafkaMessage(String payload,
         KafkaProducer<String, String> producer,
         String topic)
{
    logger.info("Sending Kafka message: " + payload);
    producer.send(new ProducerRecord<>(topic, payload));
}
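
Note that producer.send() is asynchronous: the call only hands the record to the producer's buffer, and a RecordTooLargeException is normally reported through the returned Future (or a callback) rather than thrown directly from send(). Continuing from the producer created above, here is a minimal sketch of surfacing that error immediately; the topic name "consume" comes from the question, the payload is just a placeholder, and get() throws checked exceptions, so call it from a method that declares or handles them.

String payload = "...";   // e.g. the ~1.7 MB JSON string built in the question

// Blocking on the Future makes a RecordTooLargeException (client- or broker-side)
// visible right here instead of being lost on the asynchronous path.
RecordMetadata metadata = producer.send(new ProducerRecord<>("consume", payload)).get();
logger.info("Sent to partition " + metadata.partition() + " at offset " + metadata.offset());

// Flush any buffered records and release the producer's resources before exiting.
producer.flush();
producer.close();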

You also need to ensure that the target broker supports huge messages. I configured the following on the server to support them:

auto.create.topics.enable=true
default.replication.factor=3
min.insync.replicas=2
num.io.threads=8
num.network.threads=5
num.partitions=1
num.replica.fetchers=2
replica.lag.time.max.ms=30000
socket.receive.buffer.bytes=102400
socket.request.max.bytes=104857600
socket.send.buffer.bytes=102400
unclean.leader.election.enable=true
zookeeper.session.timeout.ms=18000
replica.fetch.max.bytes=104857600
message.max.bytes=104857600
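
Finally, if you would rather keep the Spring KafkaTemplate used in the question instead of a raw KafkaProducer, the same producer-side limit can be applied when the template is built. The following is only a minimal sketch, assuming a Spring Boot application with spring-kafka on the classpath; the bootstrap address is a placeholder and the 100 MB values simply mirror the settings above.

import java.util.HashMap;
import java.util.Map;

import org.apache.kafka.clients.producer.ProducerConfig;
import org.apache.kafka.common.serialization.StringSerializer;
import org.springframework.context.annotation.Bean;
import org.springframework.context.annotation.Configuration;
import org.springframework.kafka.core.DefaultKafkaProducerFactory;
import org.springframework.kafka.core.KafkaTemplate;
import org.springframework.kafka.core.ProducerFactory;

@Configuration
public class KafkaProducerConfig {

    @Bean
    public ProducerFactory<String, String> producerFactory() {
        Map<String, Object> config = new HashMap<>();
        config.put(ProducerConfig.BOOTSTRAP_SERVERS_CONFIG, "localhost:9092"); // placeholder address
        config.put(ProducerConfig.KEY_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        config.put(ProducerConfig.VALUE_SERIALIZER_CLASS_CONFIG, StringSerializer.class);
        // Raise the client-side limits so a large payload is not rejected
        // before it ever reaches the broker.
        config.put(ProducerConfig.MAX_REQUEST_SIZE_CONFIG, 104857600);
        config.put(ProducerConfig.BUFFER_MEMORY_CONFIG, 104857600);
        return new DefaultKafkaProducerFactory<>(config);
    }

    @Bean
    public KafkaTemplate<String, String> kafkaTemplate() {
        return new KafkaTemplate<>(producerFactory());
    }
}

With such a template in place, the template.send("consume", string) call from the question can publish the large payload, provided the broker-side limits shown above are raised as well.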
